00:00:00.000 Started by upstream project "autotest-per-patch" build number 132321
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.051 The recommended git tool is: git
00:00:00.051 using credential 00000000-0000-0000-0000-000000000002
00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.084 Fetching changes from the remote Git repository
00:00:00.086 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.104 Using shallow fetch with depth 1
00:00:00.104 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.104 > git --version # timeout=10
00:00:00.124 > git --version # 'git version 2.39.2'
00:00:00.124 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.147 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.147 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.646 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.657 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.669 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.669 > git config core.sparsecheckout # timeout=10
00:00:05.679 > git read-tree -mu HEAD # timeout=10
00:00:05.695 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.721 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.722 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.846 [Pipeline] Start of Pipeline
00:00:05.859 [Pipeline] library
00:00:05.861 Loading library shm_lib@master
00:00:05.861 Library shm_lib@master is cached. Copying from home.
00:00:05.873 [Pipeline] node
00:00:05.891 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.893 [Pipeline] {
00:00:05.903 [Pipeline] catchError
00:00:05.904 [Pipeline] {
00:00:05.914 [Pipeline] wrap
00:00:05.921 [Pipeline] {
00:00:05.928 [Pipeline] stage
00:00:05.929 [Pipeline] { (Prologue)
00:00:06.115 [Pipeline] sh
00:00:06.397 + logger -p user.info -t JENKINS-CI
00:00:06.415 [Pipeline] echo
00:00:06.416 Node: CYP12
00:00:06.423 [Pipeline] sh
00:00:06.759 [Pipeline] setCustomBuildProperty
00:00:06.768 [Pipeline] echo
00:00:06.769 Cleanup processes
00:00:06.773 [Pipeline] sh
00:00:07.057 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.058 3789987 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.069 [Pipeline] sh
00:00:07.353 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.353 ++ grep -v 'sudo pgrep'
00:00:07.353 ++ awk '{print $1}'
00:00:07.353 + sudo kill -9
00:00:07.353 + true
00:00:07.369 [Pipeline] cleanWs
00:00:07.378 [WS-CLEANUP] Deleting project workspace...
00:00:07.378 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.384 [WS-CLEANUP] done
00:00:07.387 [Pipeline] setCustomBuildProperty
00:00:07.400 [Pipeline] sh
00:00:07.683 + sudo git config --global --replace-all safe.directory '*'
00:00:07.780 [Pipeline] httpRequest
00:00:08.471 [Pipeline] echo
00:00:08.472 Sorcerer 10.211.164.20 is alive
00:00:08.480 [Pipeline] retry
00:00:08.483 [Pipeline] {
00:00:08.497 [Pipeline] httpRequest
00:00:08.502 HttpMethod: GET
00:00:08.502 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.503 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.521 Response Code: HTTP/1.1 200 OK
00:00:08.521 Success: Status code 200 is in the accepted range: 200,404
00:00:08.522 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.497 [Pipeline] }
00:00:17.517 [Pipeline] // retry
00:00:17.525 [Pipeline] sh
00:00:17.813 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.829 [Pipeline] httpRequest
00:00:18.303 [Pipeline] echo
00:00:18.305 Sorcerer 10.211.164.20 is alive
00:00:18.315 [Pipeline] retry
00:00:18.317 [Pipeline] {
00:00:18.332 [Pipeline] httpRequest
00:00:18.337 HttpMethod: GET
00:00:18.338 URL: http://10.211.164.20/packages/spdk_029355612402fa1c2771cfe324ea86d10877f1b5.tar.gz
00:00:18.338 Sending request to url: http://10.211.164.20/packages/spdk_029355612402fa1c2771cfe324ea86d10877f1b5.tar.gz
00:00:18.345 Response Code: HTTP/1.1 200 OK
00:00:18.345 Success: Status code 200 is in the accepted range: 200,404
00:00:18.346 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_029355612402fa1c2771cfe324ea86d10877f1b5.tar.gz
00:02:08.971 [Pipeline] }
00:02:08.987 [Pipeline] // retry
00:02:08.995 [Pipeline] sh
00:02:09.287 + tar --no-same-owner -xf spdk_029355612402fa1c2771cfe324ea86d10877f1b5.tar.gz
00:02:12.605 [Pipeline] sh
00:02:12.895 + git -C spdk log --oneline -n5
00:02:12.895 029355612 bdev_ut: add manual examine bdev unit test case
00:02:12.895 fc96810c2 bdev: remove bdev from examine allow list on unregister
00:02:12.895 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public
00:02:12.895 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes.
00:02:12.895 03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header.
00:02:12.908 [Pipeline] }
00:02:12.923 [Pipeline] // stage
00:02:12.935 [Pipeline] stage
00:02:12.937 [Pipeline] { (Prepare)
00:02:12.951 [Pipeline] writeFile
00:02:12.962 [Pipeline] sh
00:02:13.246 + logger -p user.info -t JENKINS-CI
00:02:13.262 [Pipeline] sh
00:02:13.558 + logger -p user.info -t JENKINS-CI
00:02:13.573 [Pipeline] sh
00:02:13.867 + cat autorun-spdk.conf
00:02:13.867 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.867 SPDK_TEST_NVMF=1
00:02:13.867 SPDK_TEST_NVME_CLI=1
00:02:13.867 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.867 SPDK_TEST_NVMF_NICS=e810
00:02:13.867 SPDK_TEST_VFIOUSER=1
00:02:13.867 SPDK_RUN_UBSAN=1
00:02:13.867 NET_TYPE=phy
00:02:13.876 RUN_NIGHTLY=0
00:02:13.880 [Pipeline] readFile
00:02:13.904 [Pipeline] withEnv
00:02:13.906 [Pipeline] {
00:02:13.918 [Pipeline] sh
00:02:14.209 + set -ex
00:02:14.209 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:14.209 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:14.209 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.209 ++ SPDK_TEST_NVMF=1
00:02:14.209 ++ SPDK_TEST_NVME_CLI=1
00:02:14.209 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:14.209 ++ SPDK_TEST_NVMF_NICS=e810
00:02:14.209 ++ SPDK_TEST_VFIOUSER=1
00:02:14.209 ++ SPDK_RUN_UBSAN=1
00:02:14.209 ++ NET_TYPE=phy
00:02:14.209 ++ RUN_NIGHTLY=0
00:02:14.209 + case $SPDK_TEST_NVMF_NICS in
00:02:14.209 + DRIVERS=ice
00:02:14.209 + [[ tcp == \r\d\m\a ]]
00:02:14.209 + [[ -n ice ]]
00:02:14.209 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:14.209 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:14.209 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:14.209 rmmod: ERROR: Module irdma is not currently loaded
00:02:14.209 rmmod: ERROR: Module i40iw is not currently loaded
00:02:14.209 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:14.209 + true
00:02:14.209 + for D in $DRIVERS
00:02:14.209 + sudo modprobe ice
00:02:14.209 + exit 0
00:02:14.220 [Pipeline] }
00:02:14.236 [Pipeline] // withEnv
00:02:14.241 [Pipeline] }
00:02:14.254 [Pipeline] // stage
00:02:14.264 [Pipeline] catchError
00:02:14.265 [Pipeline] {
00:02:14.278 [Pipeline] timeout
00:02:14.278 Timeout set to expire in 1 hr 0 min
00:02:14.279 [Pipeline] {
00:02:14.292 [Pipeline] stage
00:02:14.294 [Pipeline] { (Tests)
00:02:14.308 [Pipeline] sh
00:02:14.598 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.598 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.598 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.598 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:14.598 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:14.598 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:14.598 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:14.598 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:14.598 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:14.598 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:14.598 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:14.598 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.598 + source /etc/os-release
00:02:14.598 ++ NAME='Fedora Linux'
00:02:14.598 ++ VERSION='39 (Cloud Edition)'
00:02:14.598 ++ ID=fedora
00:02:14.598 ++ VERSION_ID=39
00:02:14.598 ++ VERSION_CODENAME=
00:02:14.598 ++ PLATFORM_ID=platform:f39
00:02:14.598 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:14.598 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:14.598 ++ LOGO=fedora-logo-icon
00:02:14.598 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:14.598 ++ HOME_URL=https://fedoraproject.org/
00:02:14.598 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:14.598 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:14.598 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:14.598 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:14.598 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:14.598 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:14.598 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:14.598 ++ SUPPORT_END=2024-11-12
00:02:14.598 ++ VARIANT='Cloud Edition'
00:02:14.598 ++ VARIANT_ID=cloud
00:02:14.598 + uname -a
00:02:14.598 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:14.598 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:17.903 Hugepages
00:02:17.903 node hugesize free / total
00:02:17.903 node0 1048576kB 0 / 0
00:02:17.903 node0 2048kB 0 / 0
00:02:17.903 node1 1048576kB 0 / 0
00:02:17.903 node1 2048kB 0 / 0
00:02:17.903
00:02:17.903 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:17.903 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:17.903 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:17.903 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:17.903 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:17.903 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:17.903 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:17.903 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:17.903 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:17.903 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:17.903 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:17.903 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:17.903 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:17.903 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:17.903 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:17.903 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:17.903 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:17.903 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:17.903 + rm -f /tmp/spdk-ld-path
00:02:17.904 + source autorun-spdk.conf
00:02:17.904 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.904 ++ SPDK_TEST_NVMF=1
00:02:17.904 ++ SPDK_TEST_NVME_CLI=1
00:02:17.904 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:17.904 ++ SPDK_TEST_NVMF_NICS=e810
00:02:17.904 ++ SPDK_TEST_VFIOUSER=1
00:02:17.904 ++ SPDK_RUN_UBSAN=1
00:02:17.904 ++ NET_TYPE=phy
00:02:17.904 ++ RUN_NIGHTLY=0
00:02:17.904 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:17.904 + [[ -n '' ]]
00:02:17.904 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:17.904 + for M in /var/spdk/build-*-manifest.txt
00:02:17.904 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:17.904 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:17.904 + for M in /var/spdk/build-*-manifest.txt
00:02:17.904 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:17.904 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:18.166 + for M in /var/spdk/build-*-manifest.txt
00:02:18.166 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:18.166 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:18.166 ++ uname
00:02:18.166 + [[ Linux == \L\i\n\u\x ]]
00:02:18.166 + sudo dmesg -T
00:02:18.166 + sudo dmesg --clear
00:02:18.166 + dmesg_pid=3791108
00:02:18.166 + [[ Fedora Linux == FreeBSD ]]
00:02:18.166 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:18.166 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:18.166 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:18.166 + [[ -x /usr/src/fio-static/fio ]]
00:02:18.166 + export FIO_BIN=/usr/src/fio-static/fio
00:02:18.166 + FIO_BIN=/usr/src/fio-static/fio
00:02:18.166 + sudo dmesg -Tw
00:02:18.166 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:18.166 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:18.166 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:18.166 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:18.166 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:18.166 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:18.166 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:18.166 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:18.166 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:18.166 10:56:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:18.166 10:56:26 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:18.166 10:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:18.166 10:56:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:18.166 10:56:26 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:18.166 10:56:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:18.166 10:56:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:18.166 10:56:26 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:18.166 10:56:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:18.166 10:56:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:18.166 10:56:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:18.166 10:56:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.166 10:56:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.166 10:56:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.166 10:56:26 -- paths/export.sh@5 -- $ export PATH
00:02:18.166 10:56:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.166 10:56:26 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:18.166 10:56:26 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:18.166 10:56:26 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732010186.XXXXXX
00:02:18.166 10:56:26 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732010186.aFwTkC
00:02:18.166 10:56:26 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:18.166 10:56:26 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:18.166 10:56:26 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:18.166 10:56:26 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:18.166 10:56:26 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:18.166 10:56:26 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:18.166 10:56:26 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:18.166 10:56:26 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.428 10:56:26 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:18.428 10:56:26 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:18.428 10:56:26 -- pm/common@17 -- $ local monitor
00:02:18.428 10:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.428 10:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.428 10:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.428 10:56:26 -- pm/common@21 -- $ date +%s
00:02:18.428 10:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.428 10:56:26 -- pm/common@25 -- $ sleep 1
00:02:18.428 10:56:26 -- pm/common@21 -- $ date +%s
00:02:18.428 10:56:26 -- pm/common@21 -- $ date +%s
00:02:18.428 10:56:26 -- pm/common@21 -- $ date +%s
00:02:18.428 10:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732010186
00:02:18.428 10:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732010186
00:02:18.428 10:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732010186
00:02:18.428 10:56:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732010186
00:02:18.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732010186_collect-cpu-temp.pm.log
00:02:18.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732010186_collect-cpu-load.pm.log
00:02:18.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732010186_collect-vmstat.pm.log
00:02:18.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732010186_collect-bmc-pm.bmc.pm.log
00:02:19.370 10:56:27 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:19.370 10:56:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:19.370 10:56:27 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:19.370 10:56:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:19.370 10:56:27 -- spdk/autobuild.sh@16 -- $ date -u
00:02:19.370 Tue Nov 19 09:56:27 AM UTC 2024
00:02:19.370 10:56:27 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:19.370 v25.01-pre-195-g029355612
00:02:19.370 10:56:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:19.370 10:56:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:19.370 10:56:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:19.370 10:56:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:19.370 10:56:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:19.370 10:56:27 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.370 ************************************
00:02:19.370 START TEST ubsan
00:02:19.370 ************************************
00:02:19.370 10:56:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:19.370 using ubsan
00:02:19.370
00:02:19.370 real 0m0.001s
00:02:19.370 user 0m0.000s
00:02:19.370 sys 0m0.000s
00:02:19.370 10:56:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:19.370 10:56:27 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:19.370 ************************************
00:02:19.370 END TEST ubsan
00:02:19.370 ************************************
00:02:19.370 10:56:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:19.370 10:56:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:19.370 10:56:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:19.370 10:56:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:19.370 10:56:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:19.370 10:56:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:19.370 10:56:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:19.370 10:56:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:19.370 10:56:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:19.630 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:19.630 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:19.907 Using 'verbs' RDMA provider
00:02:35.789 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:48.024 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:48.024 Creating mk/config.mk...done.
00:02:48.024 Creating mk/cc.flags.mk...done.
00:02:48.024 Type 'make' to build.
00:02:48.024 10:56:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:48.024 10:56:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:48.024 10:56:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:48.024 10:56:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:48.024 ************************************
00:02:48.024 START TEST make
00:02:48.024 ************************************
00:02:48.024 10:56:56 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:48.598 make[1]: Nothing to be done for 'all'.
00:02:49.544 The Meson build system
00:02:49.545 Version: 1.5.0
00:02:49.545 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:49.545 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:49.545 Build type: native build
00:02:49.545 Project name: libvfio-user
00:02:49.545 Project version: 0.0.1
00:02:49.545 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:49.545 C linker for the host machine: cc ld.bfd 2.40-14
00:02:49.545 Host machine cpu family: x86_64
00:02:49.545 Host machine cpu: x86_64
00:02:49.545 Run-time dependency threads found: YES
00:02:49.545 Library dl found: YES
00:02:49.545 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:49.545 Run-time dependency json-c found: YES 0.17
00:02:49.545 Run-time dependency cmocka found: YES 1.1.7
00:02:49.545 Program pytest-3 found: NO
00:02:49.545 Program flake8 found: NO
00:02:49.545 Program misspell-fixer found: NO
00:02:49.545 Program restructuredtext-lint found: NO
00:02:49.545 Program valgrind found: YES (/usr/bin/valgrind)
00:02:49.545 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:49.545 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:49.545 Compiler for C supports arguments -Wwrite-strings: YES
00:02:49.545 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:49.545 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:49.545 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:49.545 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:49.545 Build targets in project: 8
00:02:49.545 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:49.545 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:49.545
00:02:49.545 libvfio-user 0.0.1
00:02:49.545
00:02:49.545 User defined options
00:02:49.545 buildtype : debug
00:02:49.545 default_library: shared
00:02:49.545 libdir : /usr/local/lib
00:02:49.545
00:02:49.545 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:50.113 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:50.113 [1/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:50.113 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:50.113 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:50.113 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:50.113 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:50.113 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:50.113 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:50.113 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:50.113 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:50.113 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:50.113 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:50.113 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:50.113 [13/37] Compiling C object samples/null.p/null.c.o
00:02:50.113 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:50.113 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:50.113 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:50.113 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:50.113 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:50.113 [19/37] Compiling C object samples/server.p/server.c.o
00:02:50.113 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:50.113 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:50.113 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:50.113 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:50.113 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:50.113 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:50.113 [26/37] Compiling C object samples/client.p/client.c.o
00:02:50.373 [27/37] Linking target samples/client
00:02:50.373 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:50.373 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:50.373 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:50.373 [31/37] Linking target test/unit_tests
00:02:50.373 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:50.373 [33/37] Linking target samples/shadow_ioeventfd_server
00:02:50.373 [34/37] Linking target samples/null
00:02:50.373 [35/37] Linking target samples/server
00:02:50.373 [36/37] Linking target samples/lspci
00:02:50.635 [37/37] Linking target samples/gpio-pci-idio-16
00:02:50.635 INFO: autodetecting backend as ninja
00:02:50.635 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:50.635 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:50.897 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:50.897 ninja: no work to do.
00:02:57.494 The Meson build system
00:02:57.494 Version: 1.5.0
00:02:57.494 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:57.494 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:57.494 Build type: native build
00:02:57.494 Program cat found: YES (/usr/bin/cat)
00:02:57.494 Project name: DPDK
00:02:57.494 Project version: 24.03.0
00:02:57.494 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:57.494 C linker for the host machine: cc ld.bfd 2.40-14
00:02:57.494 Host machine cpu family: x86_64
00:02:57.494 Host machine cpu: x86_64
00:02:57.494 Message: ## Building in Developer Mode ##
00:02:57.494 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:57.494 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:57.494 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:57.494 Program python3 found: YES (/usr/bin/python3)
00:02:57.494 Program cat found: YES (/usr/bin/cat)
00:02:57.494 Compiler for C supports arguments -march=native: YES
00:02:57.494 Checking for size of "void *" : 8
00:02:57.494 Checking for size of "void *" : 8 (cached)
00:02:57.494 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:57.494 Library m found: YES
00:02:57.494 Library numa found: YES
00:02:57.494 Has header "numaif.h" : YES
00:02:57.494 Library fdt found: NO
00:02:57.494 Library execinfo found: NO
00:02:57.494 Has header "execinfo.h" : YES
00:02:57.494 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:57.494 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:57.494 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:57.494 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:57.494 Run-time dependency openssl found: YES 3.1.1
00:02:57.494 Run-time dependency libpcap found: YES 1.10.4
00:02:57.494 Has header "pcap.h" with dependency libpcap: YES
00:02:57.494 Compiler for C supports arguments -Wcast-qual: YES
00:02:57.494 Compiler for C supports arguments -Wdeprecated: YES
00:02:57.494 Compiler for C supports arguments -Wformat: YES
00:02:57.494 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:57.494 Compiler for C supports arguments -Wformat-security: NO
00:02:57.494 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:57.494 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:57.494 Compiler for C supports arguments -Wnested-externs: YES
00:02:57.494 Compiler for C supports arguments -Wold-style-definition: YES
00:02:57.494 Compiler for C supports arguments -Wpointer-arith: YES
00:02:57.494 Compiler for C supports arguments -Wsign-compare: YES
00:02:57.494 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:57.494 Compiler for C supports arguments -Wundef: YES
00:02:57.494 Compiler for C supports arguments -Wwrite-strings: YES
00:02:57.494 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:57.494 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:57.494 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:57.494 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:57.494 Program objdump found: YES (/usr/bin/objdump)
00:02:57.494 Compiler for C supports arguments -mavx512f: YES
00:02:57.494 Checking if "AVX512 checking" compiles: YES
00:02:57.494 Fetching value of define "__SSE4_2__" : 1
00:02:57.494 Fetching value of define "__AES__" : 1
00:02:57.494 Fetching value of define "__AVX__" : 1
00:02:57.494 Fetching value of define "__AVX2__" : 1
00:02:57.494 Fetching value of define "__AVX512BW__" : 1
00:02:57.494 Fetching value of define "__AVX512CD__" : 1
00:02:57.494 Fetching value of define "__AVX512DQ__" : 1
00:02:57.494 Fetching value of define "__AVX512F__" : 1
00:02:57.494 Fetching value of define "__AVX512VL__" : 1 00:02:57.494 Fetching value of define "__PCLMUL__" : 1 00:02:57.494 Fetching value of define "__RDRND__" : 1 00:02:57.494 Fetching value of define "__RDSEED__" : 1 00:02:57.494 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:57.494 Fetching value of define "__znver1__" : (undefined) 00:02:57.494 Fetching value of define "__znver2__" : (undefined) 00:02:57.494 Fetching value of define "__znver3__" : (undefined) 00:02:57.494 Fetching value of define "__znver4__" : (undefined) 00:02:57.494 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:57.494 Message: lib/log: Defining dependency "log" 00:02:57.494 Message: lib/kvargs: Defining dependency "kvargs" 00:02:57.494 Message: lib/telemetry: Defining dependency "telemetry" 00:02:57.494 Checking for function "getentropy" : NO 00:02:57.494 Message: lib/eal: Defining dependency "eal" 00:02:57.494 Message: lib/ring: Defining dependency "ring" 00:02:57.494 Message: lib/rcu: Defining dependency "rcu" 00:02:57.494 Message: lib/mempool: Defining dependency "mempool" 00:02:57.494 Message: lib/mbuf: Defining dependency "mbuf" 00:02:57.494 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:57.494 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:57.494 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:57.494 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:57.494 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:57.494 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:57.494 Compiler for C supports arguments -mpclmul: YES 00:02:57.494 Compiler for C supports arguments -maes: YES 00:02:57.494 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:57.494 Compiler for C supports arguments -mavx512bw: YES 00:02:57.494 Compiler for C supports arguments -mavx512dq: YES 00:02:57.494 Compiler for C supports arguments -mavx512vl: YES 00:02:57.494 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:57.494 Compiler for C supports arguments -mavx2: YES 00:02:57.494 Compiler for C supports arguments -mavx: YES 00:02:57.494 Message: lib/net: Defining dependency "net" 00:02:57.494 Message: lib/meter: Defining dependency "meter" 00:02:57.494 Message: lib/ethdev: Defining dependency "ethdev" 00:02:57.494 Message: lib/pci: Defining dependency "pci" 00:02:57.494 Message: lib/cmdline: Defining dependency "cmdline" 00:02:57.494 Message: lib/hash: Defining dependency "hash" 00:02:57.494 Message: lib/timer: Defining dependency "timer" 00:02:57.494 Message: lib/compressdev: Defining dependency "compressdev" 00:02:57.494 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:57.494 Message: lib/dmadev: Defining dependency "dmadev" 00:02:57.494 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:57.494 Message: lib/power: Defining dependency "power" 00:02:57.494 Message: lib/reorder: Defining dependency "reorder" 00:02:57.494 Message: lib/security: Defining dependency "security" 00:02:57.494 Has header "linux/userfaultfd.h" : YES 00:02:57.494 Has header "linux/vduse.h" : YES 00:02:57.494 Message: lib/vhost: Defining dependency "vhost" 00:02:57.494 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:57.494 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:57.494 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:57.494 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:57.494 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:57.494 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:57.494 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:57.494 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:57.494 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:57.494 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:57.494 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:57.494 Configuring doxy-api-html.conf using configuration 00:02:57.494 Configuring doxy-api-man.conf using configuration 00:02:57.494 Program mandb found: YES (/usr/bin/mandb) 00:02:57.494 Program sphinx-build found: NO 00:02:57.494 Configuring rte_build_config.h using configuration 00:02:57.494 Message: 00:02:57.494 ================= 00:02:57.494 Applications Enabled 00:02:57.494 ================= 00:02:57.494 00:02:57.494 apps: 00:02:57.494 00:02:57.494 00:02:57.494 Message: 00:02:57.494 ================= 00:02:57.494 Libraries Enabled 00:02:57.494 ================= 00:02:57.494 00:02:57.494 libs: 00:02:57.494 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:57.494 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:57.494 cryptodev, dmadev, power, reorder, security, vhost, 00:02:57.494 00:02:57.494 Message: 00:02:57.494 =============== 00:02:57.494 Drivers Enabled 00:02:57.494 =============== 00:02:57.494 00:02:57.494 common: 00:02:57.494 00:02:57.494 bus: 00:02:57.494 pci, vdev, 00:02:57.494 mempool: 00:02:57.494 ring, 00:02:57.494 dma: 00:02:57.494 00:02:57.494 net: 00:02:57.494 00:02:57.494 crypto: 00:02:57.494 00:02:57.494 compress: 00:02:57.494 00:02:57.494 vdpa: 00:02:57.494 00:02:57.494 00:02:57.494 Message: 00:02:57.494 ================= 00:02:57.494 Content Skipped 00:02:57.494 ================= 00:02:57.494 00:02:57.494 apps: 00:02:57.494 dumpcap: explicitly disabled via build config 00:02:57.494 graph: explicitly disabled via build config 00:02:57.494 pdump: explicitly disabled via build config 00:02:57.495 proc-info: explicitly disabled via build config 00:02:57.495 test-acl: explicitly disabled via build config 00:02:57.495 test-bbdev: explicitly disabled via build config 00:02:57.495 test-cmdline: explicitly disabled via build config 00:02:57.495 test-compress-perf: explicitly disabled via build config 00:02:57.495 test-crypto-perf: explicitly disabled via build 
config 00:02:57.495 test-dma-perf: explicitly disabled via build config 00:02:57.495 test-eventdev: explicitly disabled via build config 00:02:57.495 test-fib: explicitly disabled via build config 00:02:57.495 test-flow-perf: explicitly disabled via build config 00:02:57.495 test-gpudev: explicitly disabled via build config 00:02:57.495 test-mldev: explicitly disabled via build config 00:02:57.495 test-pipeline: explicitly disabled via build config 00:02:57.495 test-pmd: explicitly disabled via build config 00:02:57.495 test-regex: explicitly disabled via build config 00:02:57.495 test-sad: explicitly disabled via build config 00:02:57.495 test-security-perf: explicitly disabled via build config 00:02:57.495 00:02:57.495 libs: 00:02:57.495 argparse: explicitly disabled via build config 00:02:57.495 metrics: explicitly disabled via build config 00:02:57.495 acl: explicitly disabled via build config 00:02:57.495 bbdev: explicitly disabled via build config 00:02:57.495 bitratestats: explicitly disabled via build config 00:02:57.495 bpf: explicitly disabled via build config 00:02:57.495 cfgfile: explicitly disabled via build config 00:02:57.495 distributor: explicitly disabled via build config 00:02:57.495 efd: explicitly disabled via build config 00:02:57.495 eventdev: explicitly disabled via build config 00:02:57.495 dispatcher: explicitly disabled via build config 00:02:57.495 gpudev: explicitly disabled via build config 00:02:57.495 gro: explicitly disabled via build config 00:02:57.495 gso: explicitly disabled via build config 00:02:57.495 ip_frag: explicitly disabled via build config 00:02:57.495 jobstats: explicitly disabled via build config 00:02:57.495 latencystats: explicitly disabled via build config 00:02:57.495 lpm: explicitly disabled via build config 00:02:57.495 member: explicitly disabled via build config 00:02:57.495 pcapng: explicitly disabled via build config 00:02:57.495 rawdev: explicitly disabled via build config 00:02:57.495 regexdev: explicitly 
disabled via build config 00:02:57.495 mldev: explicitly disabled via build config 00:02:57.495 rib: explicitly disabled via build config 00:02:57.495 sched: explicitly disabled via build config 00:02:57.495 stack: explicitly disabled via build config 00:02:57.495 ipsec: explicitly disabled via build config 00:02:57.495 pdcp: explicitly disabled via build config 00:02:57.495 fib: explicitly disabled via build config 00:02:57.495 port: explicitly disabled via build config 00:02:57.495 pdump: explicitly disabled via build config 00:02:57.495 table: explicitly disabled via build config 00:02:57.495 pipeline: explicitly disabled via build config 00:02:57.495 graph: explicitly disabled via build config 00:02:57.495 node: explicitly disabled via build config 00:02:57.495 00:02:57.495 drivers: 00:02:57.495 common/cpt: not in enabled drivers build config 00:02:57.495 common/dpaax: not in enabled drivers build config 00:02:57.495 common/iavf: not in enabled drivers build config 00:02:57.495 common/idpf: not in enabled drivers build config 00:02:57.495 common/ionic: not in enabled drivers build config 00:02:57.495 common/mvep: not in enabled drivers build config 00:02:57.495 common/octeontx: not in enabled drivers build config 00:02:57.495 bus/auxiliary: not in enabled drivers build config 00:02:57.495 bus/cdx: not in enabled drivers build config 00:02:57.495 bus/dpaa: not in enabled drivers build config 00:02:57.495 bus/fslmc: not in enabled drivers build config 00:02:57.495 bus/ifpga: not in enabled drivers build config 00:02:57.495 bus/platform: not in enabled drivers build config 00:02:57.495 bus/uacce: not in enabled drivers build config 00:02:57.495 bus/vmbus: not in enabled drivers build config 00:02:57.495 common/cnxk: not in enabled drivers build config 00:02:57.495 common/mlx5: not in enabled drivers build config 00:02:57.495 common/nfp: not in enabled drivers build config 00:02:57.495 common/nitrox: not in enabled drivers build config 00:02:57.495 common/qat: not 
in enabled drivers build config 00:02:57.495 common/sfc_efx: not in enabled drivers build config 00:02:57.495 mempool/bucket: not in enabled drivers build config 00:02:57.495 mempool/cnxk: not in enabled drivers build config 00:02:57.495 mempool/dpaa: not in enabled drivers build config 00:02:57.495 mempool/dpaa2: not in enabled drivers build config 00:02:57.495 mempool/octeontx: not in enabled drivers build config 00:02:57.495 mempool/stack: not in enabled drivers build config 00:02:57.495 dma/cnxk: not in enabled drivers build config 00:02:57.495 dma/dpaa: not in enabled drivers build config 00:02:57.495 dma/dpaa2: not in enabled drivers build config 00:02:57.495 dma/hisilicon: not in enabled drivers build config 00:02:57.495 dma/idxd: not in enabled drivers build config 00:02:57.495 dma/ioat: not in enabled drivers build config 00:02:57.495 dma/skeleton: not in enabled drivers build config 00:02:57.495 net/af_packet: not in enabled drivers build config 00:02:57.495 net/af_xdp: not in enabled drivers build config 00:02:57.495 net/ark: not in enabled drivers build config 00:02:57.495 net/atlantic: not in enabled drivers build config 00:02:57.495 net/avp: not in enabled drivers build config 00:02:57.495 net/axgbe: not in enabled drivers build config 00:02:57.495 net/bnx2x: not in enabled drivers build config 00:02:57.495 net/bnxt: not in enabled drivers build config 00:02:57.495 net/bonding: not in enabled drivers build config 00:02:57.495 net/cnxk: not in enabled drivers build config 00:02:57.495 net/cpfl: not in enabled drivers build config 00:02:57.495 net/cxgbe: not in enabled drivers build config 00:02:57.495 net/dpaa: not in enabled drivers build config 00:02:57.495 net/dpaa2: not in enabled drivers build config 00:02:57.495 net/e1000: not in enabled drivers build config 00:02:57.495 net/ena: not in enabled drivers build config 00:02:57.495 net/enetc: not in enabled drivers build config 00:02:57.495 net/enetfec: not in enabled drivers build config 
00:02:57.495 net/enic: not in enabled drivers build config 00:02:57.495 net/failsafe: not in enabled drivers build config 00:02:57.495 net/fm10k: not in enabled drivers build config 00:02:57.495 net/gve: not in enabled drivers build config 00:02:57.495 net/hinic: not in enabled drivers build config 00:02:57.495 net/hns3: not in enabled drivers build config 00:02:57.495 net/i40e: not in enabled drivers build config 00:02:57.495 net/iavf: not in enabled drivers build config 00:02:57.495 net/ice: not in enabled drivers build config 00:02:57.495 net/idpf: not in enabled drivers build config 00:02:57.495 net/igc: not in enabled drivers build config 00:02:57.495 net/ionic: not in enabled drivers build config 00:02:57.495 net/ipn3ke: not in enabled drivers build config 00:02:57.495 net/ixgbe: not in enabled drivers build config 00:02:57.495 net/mana: not in enabled drivers build config 00:02:57.495 net/memif: not in enabled drivers build config 00:02:57.495 net/mlx4: not in enabled drivers build config 00:02:57.495 net/mlx5: not in enabled drivers build config 00:02:57.495 net/mvneta: not in enabled drivers build config 00:02:57.495 net/mvpp2: not in enabled drivers build config 00:02:57.495 net/netvsc: not in enabled drivers build config 00:02:57.495 net/nfb: not in enabled drivers build config 00:02:57.495 net/nfp: not in enabled drivers build config 00:02:57.495 net/ngbe: not in enabled drivers build config 00:02:57.495 net/null: not in enabled drivers build config 00:02:57.495 net/octeontx: not in enabled drivers build config 00:02:57.495 net/octeon_ep: not in enabled drivers build config 00:02:57.495 net/pcap: not in enabled drivers build config 00:02:57.495 net/pfe: not in enabled drivers build config 00:02:57.495 net/qede: not in enabled drivers build config 00:02:57.495 net/ring: not in enabled drivers build config 00:02:57.495 net/sfc: not in enabled drivers build config 00:02:57.495 net/softnic: not in enabled drivers build config 00:02:57.495 net/tap: not in 
enabled drivers build config 00:02:57.495 net/thunderx: not in enabled drivers build config 00:02:57.495 net/txgbe: not in enabled drivers build config 00:02:57.495 net/vdev_netvsc: not in enabled drivers build config 00:02:57.495 net/vhost: not in enabled drivers build config 00:02:57.495 net/virtio: not in enabled drivers build config 00:02:57.495 net/vmxnet3: not in enabled drivers build config 00:02:57.495 raw/*: missing internal dependency, "rawdev" 00:02:57.495 crypto/armv8: not in enabled drivers build config 00:02:57.495 crypto/bcmfs: not in enabled drivers build config 00:02:57.495 crypto/caam_jr: not in enabled drivers build config 00:02:57.495 crypto/ccp: not in enabled drivers build config 00:02:57.495 crypto/cnxk: not in enabled drivers build config 00:02:57.495 crypto/dpaa_sec: not in enabled drivers build config 00:02:57.495 crypto/dpaa2_sec: not in enabled drivers build config 00:02:57.495 crypto/ipsec_mb: not in enabled drivers build config 00:02:57.495 crypto/mlx5: not in enabled drivers build config 00:02:57.495 crypto/mvsam: not in enabled drivers build config 00:02:57.495 crypto/nitrox: not in enabled drivers build config 00:02:57.495 crypto/null: not in enabled drivers build config 00:02:57.495 crypto/octeontx: not in enabled drivers build config 00:02:57.495 crypto/openssl: not in enabled drivers build config 00:02:57.495 crypto/scheduler: not in enabled drivers build config 00:02:57.495 crypto/uadk: not in enabled drivers build config 00:02:57.495 crypto/virtio: not in enabled drivers build config 00:02:57.495 compress/isal: not in enabled drivers build config 00:02:57.495 compress/mlx5: not in enabled drivers build config 00:02:57.495 compress/nitrox: not in enabled drivers build config 00:02:57.495 compress/octeontx: not in enabled drivers build config 00:02:57.495 compress/zlib: not in enabled drivers build config 00:02:57.495 regex/*: missing internal dependency, "regexdev" 00:02:57.495 ml/*: missing internal dependency, "mldev" 
00:02:57.495 vdpa/ifc: not in enabled drivers build config 00:02:57.495 vdpa/mlx5: not in enabled drivers build config 00:02:57.495 vdpa/nfp: not in enabled drivers build config 00:02:57.495 vdpa/sfc: not in enabled drivers build config 00:02:57.495 event/*: missing internal dependency, "eventdev" 00:02:57.495 baseband/*: missing internal dependency, "bbdev" 00:02:57.495 gpu/*: missing internal dependency, "gpudev" 00:02:57.495 00:02:57.495 00:02:57.495 Build targets in project: 84 00:02:57.495 00:02:57.496 DPDK 24.03.0 00:02:57.496 00:02:57.496 User defined options 00:02:57.496 buildtype : debug 00:02:57.496 default_library : shared 00:02:57.496 libdir : lib 00:02:57.496 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:57.496 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:57.496 c_link_args : 00:02:57.496 cpu_instruction_set: native 00:02:57.496 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:57.496 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:57.496 enable_docs : false 00:02:57.496 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:57.496 enable_kmods : false 00:02:57.496 max_lcores : 128 00:02:57.496 tests : false 00:02:57.496 00:02:57.496 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.496 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:57.496 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:57.496 [2/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:57.496 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:57.496 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:57.496 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:57.496 [6/267] Linking static target lib/librte_kvargs.a 00:02:57.496 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:57.496 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:57.496 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:57.496 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:57.496 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:57.496 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:57.496 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:57.496 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:57.496 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:57.496 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:57.755 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:57.755 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:57.755 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:57.755 [20/267] Linking static target lib/librte_log.a 00:02:57.755 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:57.755 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:57.755 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:57.755 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:57.755 [25/267] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:57.755 [26/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:57.755 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:57.755 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:57.755 [29/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:57.755 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.755 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:57.755 [32/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:57.755 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:57.755 [34/267] Linking static target lib/librte_pci.a 00:02:57.755 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:57.755 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.755 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.755 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:58.014 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:58.014 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.014 [41/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.014 [42/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.014 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:58.014 [44/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:58.014 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:58.014 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:58.014 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 
00:02:58.014 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:58.014 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:58.014 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:58.014 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:58.014 [52/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:58.014 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:58.014 [54/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.014 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:58.014 [56/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:58.014 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:58.014 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:58.014 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:58.014 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:58.014 [61/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:58.014 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:58.014 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:58.014 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:58.014 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:58.014 [66/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.014 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:58.014 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:58.014 [69/267] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:58.014 [70/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:58.014 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:58.014 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:58.014 [73/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:58.014 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:58.014 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:58.014 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:58.014 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:58.014 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:58.014 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:58.014 [80/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.014 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:58.014 [82/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.014 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:58.014 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:58.014 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:58.014 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:58.014 [87/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:58.014 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:58.014 [89/267] Linking static target lib/librte_meter.a 00:02:58.014 [90/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:58.014 [91/267] Linking static target lib/librte_ring.a 00:02:58.014 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:02:58.014 [93/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:58.014 [94/267] Linking static target lib/librte_telemetry.a 00:02:58.014 [95/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:58.014 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:58.014 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:58.014 [98/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:58.014 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:58.014 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:58.014 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:58.014 [102/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:58.014 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:58.014 [104/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:58.014 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:58.014 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:58.014 [107/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:58.014 [108/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:58.014 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:58.014 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.274 [111/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:58.274 [112/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:58.274 [113/267] Linking static target lib/librte_timer.a 00:02:58.274 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.274 [115/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:58.274 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:58.274 [117/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:58.274 [118/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:58.274 [119/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:58.274 [120/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:58.274 [121/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:58.274 [122/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:58.274 [123/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:58.274 [124/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:58.274 [125/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:58.274 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:58.274 [127/267] Linking static target lib/librte_cmdline.a
00:02:58.274 [128/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:58.274 [129/267] Linking static target lib/librte_power.a
00:02:58.274 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:58.274 [131/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:58.274 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:58.274 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:58.274 [134/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:58.275 [135/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:58.275 [136/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:58.275 [137/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:58.275 [138/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:58.275 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:58.275 [140/267] Linking static target lib/librte_rcu.a
00:02:58.275 [141/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:58.275 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:58.275 [143/267] Linking static target lib/librte_mempool.a
00:02:58.275 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:58.275 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:58.275 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:58.275 [147/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:58.275 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:58.275 [149/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.275 [150/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:58.275 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:58.275 [152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:58.275 [153/267] Linking static target lib/librte_dmadev.a
00:02:58.275 [154/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:58.275 [155/267] Linking static target lib/librte_reorder.a
00:02:58.275 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:58.275 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:58.275 [158/267] Linking static target lib/librte_eal.a
00:02:58.275 [159/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:58.275 [160/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:58.275 [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:58.275 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:58.275 [163/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:58.275 [164/267] Linking target lib/librte_log.so.24.1
00:02:58.275 [165/267] Linking static target lib/librte_compressdev.a
00:02:58.275 [166/267] Linking static target lib/librte_net.a
00:02:58.275 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:58.275 [168/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:58.275 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:58.275 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:58.275 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:58.275 [172/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:58.275 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:58.275 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:58.275 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:58.275 [176/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:58.275 [177/267] Linking static target lib/librte_mbuf.a
00:02:58.275 [178/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:58.275 [179/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:58.275 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:58.275 [181/267] Linking static target lib/librte_security.a
00:02:58.275 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:58.275 [183/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.536 [184/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.536 [185/267] Linking target lib/librte_kvargs.so.24.1
00:02:58.536 [186/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:58.536 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:58.536 [188/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:58.536 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:58.536 [190/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:58.536 [191/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:58.536 [192/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:58.536 [193/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:58.536 [194/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:58.536 [195/267] Linking static target lib/librte_hash.a
00:02:58.536 [196/267] Linking static target drivers/librte_bus_vdev.a
00:02:58.536 [197/267] Linking static target drivers/librte_bus_pci.a
00:02:58.536 [198/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:58.536 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:58.536 [200/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:58.536 [201/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:58.536 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:58.536 [203/267] Linking static target drivers/librte_mempool_ring.a
00:02:58.536 [204/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.536 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.536 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.797 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:58.797 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.797 [209/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:58.797 [210/267] Linking static target lib/librte_cryptodev.a
00:02:58.797 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.797 [212/267] Linking target lib/librte_telemetry.so.24.1
00:02:58.797 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:58.797 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.058 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.058 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.058 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.058 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:59.058 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:59.058 [220/267] Linking static target lib/librte_ethdev.a
00:02:59.318 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.318 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.318 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.318 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.578 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.578 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.839 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:59.839 [228/267] Linking static target lib/librte_vhost.a
00:03:00.778 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.166 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.744 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.316 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.576 [233/267] Linking target lib/librte_eal.so.24.1
00:03:09.576 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:09.576 [235/267] Linking target lib/librte_dmadev.so.24.1
00:03:09.576 [236/267] Linking target lib/librte_ring.so.24.1
00:03:09.576 [237/267] Linking target lib/librte_pci.so.24.1
00:03:09.576 [238/267] Linking target lib/librte_meter.so.24.1
00:03:09.576 [239/267] Linking target lib/librte_timer.so.24.1
00:03:09.576 [240/267] Linking target drivers/librte_bus_vdev.so.24.1
00:03:09.836 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:09.836 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:09.836 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:09.836 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:09.836 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:09.836 [246/267] Linking target lib/librte_rcu.so.24.1
00:03:09.836 [247/267] Linking target drivers/librte_bus_pci.so.24.1
00:03:09.836 [248/267] Linking target lib/librte_mempool.so.24.1
00:03:09.836 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:10.097 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:10.097 [251/267] Linking target drivers/librte_mempool_ring.so.24.1
00:03:10.097 [252/267] Linking target lib/librte_mbuf.so.24.1
00:03:10.097 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:10.097 [254/267] Linking target lib/librte_cryptodev.so.24.1
00:03:10.097 [255/267] Linking target lib/librte_net.so.24.1
00:03:10.097 [256/267] Linking target lib/librte_compressdev.so.24.1
00:03:10.097 [257/267] Linking target lib/librte_reorder.so.24.1
00:03:10.358 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:10.358 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:10.358 [260/267] Linking target lib/librte_security.so.24.1
00:03:10.358 [261/267] Linking target lib/librte_cmdline.so.24.1
00:03:10.358 [262/267] Linking target lib/librte_hash.so.24.1
00:03:10.358 [263/267] Linking target lib/librte_ethdev.so.24.1
00:03:10.619 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:10.619 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:10.619 [266/267] Linking target lib/librte_power.so.24.1
00:03:10.619 [267/267] Linking target lib/librte_vhost.so.24.1
00:03:10.619 INFO: autodetecting backend as ninja
00:03:10.619 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144
00:03:15.911 CC lib/ut_mock/mock.o
00:03:15.911 CC lib/log/log.o
00:03:15.911 CC lib/log/log_flags.o
00:03:15.911 CC lib/ut/ut.o
00:03:15.911 CC lib/log/log_deprecated.o
00:03:15.911 LIB libspdk_ut.a
00:03:15.911 LIB libspdk_ut_mock.a
00:03:15.911 LIB libspdk_log.a
00:03:15.911 SO libspdk_ut.so.2.0
00:03:15.911 SO libspdk_ut_mock.so.6.0
00:03:15.911 SO libspdk_log.so.7.1
00:03:15.911 SYMLINK libspdk_ut.so
00:03:15.911 SYMLINK libspdk_ut_mock.so
00:03:15.911 SYMLINK libspdk_log.so
00:03:15.911 CC lib/dma/dma.o
00:03:15.911 CC lib/ioat/ioat.o
00:03:15.911 CXX lib/trace_parser/trace.o
00:03:15.911 CC lib/util/base64.o
00:03:15.911 CC lib/util/bit_array.o
00:03:15.911 CC lib/util/cpuset.o
00:03:15.911 CC lib/util/crc16.o
00:03:15.911 CC lib/util/crc32.o
00:03:15.911 CC lib/util/crc32c.o
00:03:15.911 CC lib/util/crc32_ieee.o
00:03:15.911 CC lib/util/crc64.o
00:03:15.911 CC lib/util/dif.o
00:03:15.911 CC lib/util/fd.o
00:03:15.911 CC lib/util/fd_group.o
00:03:15.911 CC lib/util/file.o
00:03:15.911 CC lib/util/hexlify.o
00:03:15.911 CC lib/util/iov.o
00:03:15.911 CC lib/util/math.o
00:03:15.911 CC lib/util/net.o
00:03:15.911 CC lib/util/pipe.o
00:03:15.911 CC lib/util/strerror_tls.o
00:03:15.911 CC lib/util/string.o
00:03:15.911 CC lib/util/uuid.o
00:03:15.911 CC lib/util/xor.o
00:03:15.911 CC lib/util/zipf.o
00:03:15.911 CC lib/util/md5.o
00:03:15.911 CC lib/vfio_user/host/vfio_user.o
00:03:15.911 CC lib/vfio_user/host/vfio_user_pci.o
00:03:15.911 LIB libspdk_dma.a
00:03:15.911 SO libspdk_dma.so.5.0
00:03:16.173 LIB libspdk_ioat.a
00:03:16.173 SYMLINK libspdk_dma.so
00:03:16.173 SO libspdk_ioat.so.7.0
00:03:16.173 SYMLINK libspdk_ioat.so
00:03:16.173 LIB libspdk_vfio_user.a
00:03:16.173 SO libspdk_vfio_user.so.5.0
00:03:16.173 LIB libspdk_util.a
00:03:16.436 SYMLINK libspdk_vfio_user.so
00:03:16.436 SO libspdk_util.so.10.1
00:03:16.436 SYMLINK libspdk_util.so
00:03:16.697 LIB libspdk_trace_parser.a
00:03:16.697 SO libspdk_trace_parser.so.6.0
00:03:16.697 SYMLINK libspdk_trace_parser.so
00:03:16.957 CC lib/rdma_utils/rdma_utils.o
00:03:16.957 CC lib/json/json_parse.o
00:03:16.957 CC lib/json/json_util.o
00:03:16.957 CC lib/json/json_write.o
00:03:16.957 CC lib/idxd/idxd.o
00:03:16.957 CC lib/idxd/idxd_user.o
00:03:16.957 CC lib/conf/conf.o
00:03:16.957 CC lib/idxd/idxd_kernel.o
00:03:16.957 CC lib/vmd/vmd.o
00:03:16.957 CC lib/env_dpdk/env.o
00:03:16.957 CC lib/vmd/led.o
00:03:16.958 CC lib/env_dpdk/memory.o
00:03:16.958 CC lib/env_dpdk/pci.o
00:03:16.958 CC lib/env_dpdk/init.o
00:03:16.958 CC lib/env_dpdk/threads.o
00:03:16.958 CC lib/env_dpdk/pci_ioat.o
00:03:16.958 CC lib/env_dpdk/pci_virtio.o
00:03:16.958 CC lib/env_dpdk/pci_vmd.o
00:03:16.958 CC lib/env_dpdk/pci_idxd.o
00:03:16.958 CC lib/env_dpdk/pci_event.o
00:03:16.958 CC lib/env_dpdk/sigbus_handler.o
00:03:16.958 CC lib/env_dpdk/pci_dpdk.o
00:03:16.958 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:16.958 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:17.219 LIB libspdk_json.a
00:03:17.219 LIB libspdk_conf.a
00:03:17.219 LIB libspdk_rdma_utils.a
00:03:17.219 SO libspdk_json.so.6.0
00:03:17.219 SO libspdk_conf.so.6.0
00:03:17.219 SO libspdk_rdma_utils.so.1.0
00:03:17.219 SYMLINK libspdk_json.so
00:03:17.219 SYMLINK libspdk_conf.so
00:03:17.219 SYMLINK libspdk_rdma_utils.so
00:03:17.480 LIB libspdk_idxd.a
00:03:17.480 SO libspdk_idxd.so.12.1
00:03:17.480 LIB libspdk_vmd.a
00:03:17.480 SO libspdk_vmd.so.6.0
00:03:17.480 SYMLINK libspdk_idxd.so
00:03:17.480 CC lib/jsonrpc/jsonrpc_server.o
00:03:17.480 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:17.480 CC lib/jsonrpc/jsonrpc_client.o
00:03:17.480 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:17.480 SYMLINK libspdk_vmd.so
00:03:17.480 CC lib/rdma_provider/common.o
00:03:17.480 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:17.742 LIB libspdk_jsonrpc.a
00:03:17.742 LIB libspdk_rdma_provider.a
00:03:17.742 SO libspdk_jsonrpc.so.6.0
00:03:17.742 SO libspdk_rdma_provider.so.7.0
00:03:18.004 SYMLINK libspdk_jsonrpc.so
00:03:18.004 SYMLINK libspdk_rdma_provider.so
00:03:18.004 LIB libspdk_env_dpdk.a
00:03:18.004 SO libspdk_env_dpdk.so.15.1
00:03:18.265 CC lib/rpc/rpc.o
00:03:18.265 SYMLINK libspdk_env_dpdk.so
00:03:18.526 LIB libspdk_rpc.a
00:03:18.526 SO libspdk_rpc.so.6.0
00:03:18.526 SYMLINK libspdk_rpc.so
00:03:18.787 CC lib/keyring/keyring.o
00:03:18.787 CC lib/keyring/keyring_rpc.o
00:03:18.787 CC lib/trace/trace.o
00:03:18.787 CC lib/trace/trace_flags.o
00:03:18.787 CC lib/trace/trace_rpc.o
00:03:18.787 CC lib/notify/notify.o
00:03:18.787 CC lib/notify/notify_rpc.o
00:03:19.049 LIB libspdk_notify.a
00:03:19.049 LIB libspdk_keyring.a
00:03:19.049 SO libspdk_notify.so.6.0
00:03:19.049 LIB libspdk_trace.a
00:03:19.049 SO libspdk_keyring.so.2.0
00:03:19.308 SYMLINK libspdk_notify.so
00:03:19.308 SO libspdk_trace.so.11.0
00:03:19.308 SYMLINK libspdk_keyring.so
00:03:19.308 SYMLINK libspdk_trace.so
00:03:19.569 CC lib/sock/sock.o
00:03:19.569 CC lib/sock/sock_rpc.o
00:03:19.569 CC lib/thread/thread.o
00:03:19.569 CC lib/thread/iobuf.o
00:03:20.142 LIB libspdk_sock.a
00:03:20.142 SO libspdk_sock.so.10.0
00:03:20.142 SYMLINK libspdk_sock.so
00:03:20.403 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:20.403 CC lib/nvme/nvme_ctrlr.o
00:03:20.403 CC lib/nvme/nvme_fabric.o
00:03:20.403 CC lib/nvme/nvme_ns_cmd.o
00:03:20.403 CC lib/nvme/nvme_pcie_common.o
00:03:20.403 CC lib/nvme/nvme_ns.o
00:03:20.403 CC lib/nvme/nvme_pcie.o
00:03:20.403 CC lib/nvme/nvme_qpair.o
00:03:20.403 CC lib/nvme/nvme.o
00:03:20.403 CC lib/nvme/nvme_quirks.o
00:03:20.403 CC lib/nvme/nvme_transport.o
00:03:20.403 CC lib/nvme/nvme_discovery.o
00:03:20.403 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:20.403 CC lib/nvme/nvme_io_msg.o
00:03:20.403 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:20.403 CC lib/nvme/nvme_tcp.o
00:03:20.403 CC lib/nvme/nvme_opal.o
00:03:20.403 CC lib/nvme/nvme_poll_group.o
00:03:20.403 CC lib/nvme/nvme_zns.o
00:03:20.403 CC lib/nvme/nvme_stubs.o
00:03:20.403 CC lib/nvme/nvme_auth.o
00:03:20.403 CC lib/nvme/nvme_cuse.o
00:03:20.403 CC lib/nvme/nvme_vfio_user.o
00:03:20.403 CC lib/nvme/nvme_rdma.o
00:03:20.977 LIB libspdk_thread.a
00:03:20.977 SO libspdk_thread.so.11.0
00:03:20.977 SYMLINK libspdk_thread.so
00:03:21.238 CC lib/init/json_config.o
00:03:21.238 CC lib/vfu_tgt/tgt_endpoint.o
00:03:21.238 CC lib/blob/blobstore.o
00:03:21.238 CC lib/vfu_tgt/tgt_rpc.o
00:03:21.238 CC lib/blob/request.o
00:03:21.238 CC lib/init/subsystem.o
00:03:21.238 CC lib/blob/zeroes.o
00:03:21.238 CC lib/init/subsystem_rpc.o
00:03:21.238 CC lib/blob/blob_bs_dev.o
00:03:21.238 CC lib/init/rpc.o
00:03:21.238 CC lib/fsdev/fsdev.o
00:03:21.238 CC lib/fsdev/fsdev_io.o
00:03:21.238 CC lib/fsdev/fsdev_rpc.o
00:03:21.499 CC lib/virtio/virtio.o
00:03:21.499 CC lib/virtio/virtio_vhost_user.o
00:03:21.499 CC lib/accel/accel.o
00:03:21.499 CC lib/virtio/virtio_vfio_user.o
00:03:21.499 CC lib/accel/accel_rpc.o
00:03:21.499 CC lib/accel/accel_sw.o
00:03:21.499 CC lib/virtio/virtio_pci.o
00:03:21.761 LIB libspdk_init.a
00:03:21.761 SO libspdk_init.so.6.0
00:03:21.761 LIB libspdk_vfu_tgt.a
00:03:21.761 LIB libspdk_virtio.a
00:03:21.761 SYMLINK libspdk_init.so
00:03:21.761 SO libspdk_vfu_tgt.so.3.0
00:03:21.761 SO libspdk_virtio.so.7.0
00:03:21.761 SYMLINK libspdk_vfu_tgt.so
00:03:21.761 SYMLINK libspdk_virtio.so
00:03:22.021 LIB libspdk_fsdev.a
00:03:22.021 SO libspdk_fsdev.so.2.0
00:03:22.021 CC lib/event/app.o
00:03:22.021 CC lib/event/reactor.o
00:03:22.021 CC lib/event/log_rpc.o
00:03:22.021 CC lib/event/app_rpc.o
00:03:22.021 CC lib/event/scheduler_static.o
00:03:22.021 SYMLINK libspdk_fsdev.so
00:03:22.282 LIB libspdk_accel.a
00:03:22.282 LIB libspdk_nvme.a
00:03:22.282 SO libspdk_accel.so.16.0
00:03:22.544 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:22.544 SYMLINK libspdk_accel.so
00:03:22.544 LIB libspdk_event.a
00:03:22.544 SO libspdk_nvme.so.15.0
00:03:22.544 SO libspdk_event.so.14.0
00:03:22.544 SYMLINK libspdk_event.so
00:03:22.813 SYMLINK libspdk_nvme.so
00:03:22.813 CC lib/bdev/bdev.o
00:03:22.813 CC lib/bdev/bdev_rpc.o
00:03:22.813 CC lib/bdev/bdev_zone.o
00:03:22.813 CC lib/bdev/part.o
00:03:22.813 CC lib/bdev/scsi_nvme.o
00:03:23.166 LIB libspdk_fuse_dispatcher.a
00:03:23.166 SO libspdk_fuse_dispatcher.so.1.0
00:03:23.166 SYMLINK libspdk_fuse_dispatcher.so
00:03:24.140 LIB libspdk_blob.a
00:03:24.140 SO libspdk_blob.so.11.0
00:03:24.140 SYMLINK libspdk_blob.so
00:03:24.402 CC lib/blobfs/blobfs.o
00:03:24.402 CC lib/blobfs/tree.o
00:03:24.402 CC lib/lvol/lvol.o
00:03:25.348 LIB libspdk_bdev.a
00:03:25.348 SO libspdk_bdev.so.17.0
00:03:25.348 LIB libspdk_blobfs.a
00:03:25.348 SYMLINK libspdk_bdev.so
00:03:25.348 SO libspdk_blobfs.so.10.0
00:03:25.348 LIB libspdk_lvol.a
00:03:25.348 SO libspdk_lvol.so.10.0
00:03:25.348 SYMLINK libspdk_blobfs.so
00:03:25.348 SYMLINK libspdk_lvol.so
00:03:25.610 CC lib/nvmf/ctrlr.o
00:03:25.610 CC lib/nvmf/ctrlr_discovery.o
00:03:25.610 CC lib/nvmf/ctrlr_bdev.o
00:03:25.610 CC lib/nvmf/subsystem.o
00:03:25.610 CC lib/nvmf/nvmf.o
00:03:25.610 CC lib/nvmf/nvmf_rpc.o
00:03:25.610 CC lib/nvmf/transport.o
00:03:25.610 CC lib/nvmf/tcp.o
00:03:25.610 CC lib/nvmf/stubs.o
00:03:25.610 CC lib/nvmf/mdns_server.o
00:03:25.610 CC lib/nvmf/vfio_user.o
00:03:25.610 CC lib/nvmf/rdma.o
00:03:25.610 CC lib/ublk/ublk.o
00:03:25.610 CC lib/ublk/ublk_rpc.o
00:03:25.610 CC lib/nvmf/auth.o
00:03:25.610 CC lib/nbd/nbd.o
00:03:25.610 CC lib/nbd/nbd_rpc.o
00:03:25.610 CC lib/scsi/dev.o
00:03:25.610 CC lib/scsi/lun.o
00:03:25.610 CC lib/ftl/ftl_core.o
00:03:25.610 CC lib/ftl/ftl_init.o
00:03:25.610 CC lib/scsi/port.o
00:03:25.610 CC lib/ftl/ftl_layout.o
00:03:25.610 CC lib/scsi/scsi.o
00:03:25.610 CC lib/ftl/ftl_debug.o
00:03:25.610 CC lib/scsi/scsi_bdev.o
00:03:25.610 CC lib/ftl/ftl_io.o
00:03:25.610 CC lib/scsi/scsi_pr.o
00:03:25.610 CC lib/ftl/ftl_sb.o
00:03:25.610 CC lib/scsi/scsi_rpc.o
00:03:25.610 CC lib/ftl/ftl_l2p.o
00:03:25.610 CC lib/scsi/task.o
00:03:25.610 CC lib/ftl/ftl_l2p_flat.o
00:03:25.610 CC lib/ftl/ftl_nv_cache.o
00:03:25.610 CC lib/ftl/ftl_band.o
00:03:25.610 CC lib/ftl/ftl_band_ops.o
00:03:25.610 CC lib/ftl/ftl_writer.o
00:03:25.610 CC lib/ftl/ftl_rq.o
00:03:25.610 CC lib/ftl/ftl_reloc.o
00:03:25.610 CC lib/ftl/ftl_l2p_cache.o
00:03:25.610 CC lib/ftl/ftl_p2l.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt.o
00:03:25.610 CC lib/ftl/ftl_p2l_log.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:25.610 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:25.611 CC lib/ftl/utils/ftl_conf.o
00:03:25.611 CC lib/ftl/utils/ftl_md.o
00:03:25.611 CC lib/ftl/utils/ftl_mempool.o
00:03:25.611 CC lib/ftl/utils/ftl_property.o
00:03:25.611 CC lib/ftl/utils/ftl_bitmap.o
00:03:25.611 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:25.611 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:25.611 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:25.611 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:25.611 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:25.611 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:25.611 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:25.611 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:25.611 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:25.611 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:25.611 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:25.611 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:25.611 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:25.611 CC lib/ftl/ftl_trace.o
00:03:25.611 CC lib/ftl/base/ftl_base_dev.o
00:03:25.611 CC lib/ftl/base/ftl_base_bdev.o
00:03:26.180 LIB libspdk_nbd.a
00:03:26.180 SO libspdk_nbd.so.7.0
00:03:26.180 LIB libspdk_scsi.a
00:03:26.180 SYMLINK libspdk_nbd.so
00:03:26.180 SO libspdk_scsi.so.9.0
00:03:26.441 LIB libspdk_ublk.a
00:03:26.441 SYMLINK libspdk_scsi.so
00:03:26.441 SO libspdk_ublk.so.3.0
00:03:26.441 SYMLINK libspdk_ublk.so
00:03:26.703 LIB libspdk_ftl.a
00:03:26.703 CC lib/iscsi/conn.o
00:03:26.703 CC lib/iscsi/param.o
00:03:26.703 CC lib/iscsi/init_grp.o
00:03:26.703 CC lib/iscsi/iscsi.o
00:03:26.703 CC lib/iscsi/portal_grp.o
00:03:26.703 CC lib/iscsi/tgt_node.o
00:03:26.703 CC lib/iscsi/iscsi_subsystem.o
00:03:26.703 CC lib/iscsi/iscsi_rpc.o
00:03:26.703 CC lib/iscsi/task.o
00:03:26.703 CC lib/vhost/vhost.o
00:03:26.703 CC lib/vhost/vhost_rpc.o
00:03:26.703 CC lib/vhost/vhost_scsi.o
00:03:26.703 CC lib/vhost/vhost_blk.o
00:03:26.703 CC lib/vhost/rte_vhost_user.o
00:03:26.703 SO libspdk_ftl.so.9.0
00:03:26.963 SYMLINK libspdk_ftl.so
00:03:27.535 LIB libspdk_nvmf.a
00:03:27.535 SO libspdk_nvmf.so.20.0
00:03:27.796 LIB libspdk_vhost.a
00:03:27.796 SO libspdk_vhost.so.8.0
00:03:27.796 SYMLINK libspdk_nvmf.so
00:03:27.796 SYMLINK libspdk_vhost.so
00:03:27.796 LIB libspdk_iscsi.a
00:03:28.057 SO libspdk_iscsi.so.8.0
00:03:28.057 SYMLINK libspdk_iscsi.so
00:03:28.630 CC module/vfu_device/vfu_virtio.o
00:03:28.630 CC module/vfu_device/vfu_virtio_blk.o
00:03:28.630 CC module/env_dpdk/env_dpdk_rpc.o
00:03:28.630 CC module/vfu_device/vfu_virtio_scsi.o
00:03:28.630 CC module/vfu_device/vfu_virtio_rpc.o
00:03:28.630 CC module/vfu_device/vfu_virtio_fs.o
00:03:28.892 CC module/sock/posix/posix.o
00:03:28.892 LIB libspdk_env_dpdk_rpc.a
00:03:28.892 CC module/blob/bdev/blob_bdev.o
00:03:28.892 CC module/accel/error/accel_error.o
00:03:28.892 CC module/keyring/file/keyring.o
00:03:28.892 CC module/accel/error/accel_error_rpc.o
00:03:28.892 CC module/accel/ioat/accel_ioat.o
00:03:28.892 CC module/keyring/file/keyring_rpc.o
00:03:28.892 CC module/accel/ioat/accel_ioat_rpc.o
00:03:28.892 CC module/fsdev/aio/fsdev_aio.o
00:03:28.892 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:28.892 CC module/accel/dsa/accel_dsa.o
00:03:28.892 CC module/keyring/linux/keyring.o
00:03:28.892 CC module/fsdev/aio/linux_aio_mgr.o
00:03:28.892 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:28.892 CC module/keyring/linux/keyring_rpc.o
00:03:28.892 CC module/accel/dsa/accel_dsa_rpc.o
00:03:28.892 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:28.892 CC module/accel/iaa/accel_iaa.o
00:03:28.892 CC module/scheduler/gscheduler/gscheduler.o
00:03:28.892 CC module/accel/iaa/accel_iaa_rpc.o
00:03:28.892 SO libspdk_env_dpdk_rpc.so.6.0
00:03:28.892 SYMLINK libspdk_env_dpdk_rpc.so
00:03:29.154 LIB libspdk_keyring_file.a
00:03:29.154 LIB libspdk_keyring_linux.a
00:03:29.154 LIB libspdk_scheduler_gscheduler.a
00:03:29.154 LIB libspdk_scheduler_dpdk_governor.a
00:03:29.154 SO libspdk_keyring_file.so.2.0
00:03:29.154 LIB libspdk_accel_ioat.a
00:03:29.154 LIB libspdk_accel_error.a
00:03:29.154 SO libspdk_keyring_linux.so.1.0
00:03:29.154 SO libspdk_scheduler_gscheduler.so.4.0
00:03:29.154 LIB libspdk_accel_iaa.a
00:03:29.154 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:29.154 LIB libspdk_scheduler_dynamic.a
00:03:29.154 SO libspdk_accel_ioat.so.6.0
00:03:29.154 SO libspdk_accel_error.so.2.0
00:03:29.154 SO libspdk_accel_iaa.so.3.0
00:03:29.154 SO libspdk_scheduler_dynamic.so.4.0
00:03:29.154 SYMLINK libspdk_keyring_file.so
00:03:29.154 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:29.154 LIB libspdk_accel_dsa.a
00:03:29.154 LIB libspdk_blob_bdev.a
00:03:29.154 SYMLINK libspdk_keyring_linux.so
00:03:29.154 SYMLINK libspdk_scheduler_gscheduler.so
00:03:29.154 SYMLINK libspdk_accel_ioat.so
00:03:29.154 SO libspdk_accel_dsa.so.5.0
00:03:29.154 SYMLINK libspdk_accel_error.so
00:03:29.154 SO libspdk_blob_bdev.so.11.0
00:03:29.154 SYMLINK libspdk_scheduler_dynamic.so
00:03:29.154 SYMLINK libspdk_accel_iaa.so
00:03:29.154 LIB libspdk_vfu_device.a
00:03:29.154 SYMLINK libspdk_blob_bdev.so
00:03:29.154 SO libspdk_vfu_device.so.3.0
00:03:29.154 SYMLINK libspdk_accel_dsa.so
00:03:29.415 SYMLINK libspdk_vfu_device.so
00:03:29.415 LIB libspdk_fsdev_aio.a
00:03:29.415 LIB libspdk_sock_posix.a
00:03:29.677 SO libspdk_fsdev_aio.so.1.0
00:03:29.677 SO libspdk_sock_posix.so.6.0
00:03:29.677 SYMLINK libspdk_fsdev_aio.so
00:03:29.677 SYMLINK libspdk_sock_posix.so
00:03:29.938 CC module/bdev/gpt/gpt.o
00:03:29.938 CC module/bdev/gpt/vbdev_gpt.o
00:03:29.938 CC module/bdev/error/vbdev_error.o
00:03:29.938 CC module/bdev/error/vbdev_error_rpc.o
00:03:29.938 CC module/bdev/nvme/bdev_nvme.o
00:03:29.938 CC module/bdev/delay/vbdev_delay.o
00:03:29.938 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:29.938 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:29.938 CC module/bdev/nvme/bdev_mdns_client.o
00:03:29.938 CC module/bdev/nvme/nvme_rpc.o
00:03:29.938 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:29.938 CC module/bdev/nvme/vbdev_opal.o
00:03:29.938 CC module/bdev/passthru/vbdev_passthru.o
00:03:29.938 CC module/bdev/aio/bdev_aio.o
00:03:29.938 CC module/bdev/lvol/vbdev_lvol.o
00:03:29.938 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:29.938 CC module/bdev/null/bdev_null.o
00:03:29.938 CC module/bdev/aio/bdev_aio_rpc.o
00:03:29.938 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:29.938 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:29.938 CC module/bdev/malloc/bdev_malloc.o
00:03:29.938 CC module/bdev/null/bdev_null_rpc.o
00:03:29.938 CC module/bdev/ftl/bdev_ftl.o
00:03:29.938 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:29.938 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:29.938 CC module/blobfs/bdev/blobfs_bdev.o
00:03:29.938 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:29.938 CC module/bdev/raid/bdev_raid.o
00:03:29.938 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:29.938 CC module/bdev/raid/bdev_raid_rpc.o
00:03:29.938 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:29.938 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:29.938 CC module/bdev/raid/bdev_raid_sb.o
00:03:29.938 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:29.938 CC module/bdev/raid/raid0.o
00:03:29.938 CC module/bdev/split/vbdev_split.o
00:03:29.938 CC module/bdev/iscsi/bdev_iscsi.o
00:03:29.938 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:29.938 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:29.938 CC module/bdev/raid/raid1.o
00:03:29.938 CC module/bdev/split/vbdev_split_rpc.o
00:03:29.938 CC module/bdev/raid/concat.o
00:03:30.198 LIB libspdk_blobfs_bdev.a
00:03:30.198 SO libspdk_blobfs_bdev.so.6.0
00:03:30.198 LIB libspdk_bdev_error.a
00:03:30.198 LIB libspdk_bdev_split.a
00:03:30.198 LIB libspdk_bdev_gpt.a
00:03:30.198 SO libspdk_bdev_split.so.6.0
00:03:30.198 LIB libspdk_bdev_null.a
00:03:30.198 SO libspdk_bdev_gpt.so.6.0
00:03:30.198 SO libspdk_bdev_error.so.6.0
00:03:30.199 LIB libspdk_bdev_passthru.a
00:03:30.199 LIB libspdk_bdev_ftl.a
00:03:30.199 SYMLINK libspdk_blobfs_bdev.so
00:03:30.199 SO libspdk_bdev_null.so.6.0
00:03:30.199 LIB libspdk_bdev_aio.a
00:03:30.199 SO libspdk_bdev_passthru.so.6.0
00:03:30.199 SYMLINK libspdk_bdev_gpt.so
00:03:30.199 SO libspdk_bdev_ftl.so.6.0
00:03:30.199 LIB libspdk_bdev_zone_block.a
00:03:30.199 SYMLINK libspdk_bdev_split.so
00:03:30.199 LIB libspdk_bdev_iscsi.a
00:03:30.199 SYMLINK libspdk_bdev_error.so
00:03:30.199 LIB libspdk_bdev_delay.a
00:03:30.199 SO libspdk_bdev_aio.so.6.0
00:03:30.199 SO libspdk_bdev_zone_block.so.6.0
00:03:30.199 SO libspdk_bdev_iscsi.so.6.0
00:03:30.199 LIB libspdk_bdev_malloc.a
00:03:30.199 SYMLINK libspdk_bdev_null.so
00:03:30.199 SO libspdk_bdev_delay.so.6.0
00:03:30.199 SYMLINK libspdk_bdev_passthru.so
00:03:30.199 SYMLINK libspdk_bdev_ftl.so
00:03:30.199 SO libspdk_bdev_malloc.so.6.0
00:03:30.199 SYMLINK libspdk_bdev_aio.so
00:03:30.459 SYMLINK libspdk_bdev_zone_block.so
00:03:30.459 SYMLINK libspdk_bdev_iscsi.so
00:03:30.459 SYMLINK libspdk_bdev_delay.so
00:03:30.459 SYMLINK libspdk_bdev_malloc.so
00:03:30.459 LIB libspdk_bdev_lvol.a
00:03:30.459 LIB libspdk_bdev_virtio.a
00:03:30.459 SO libspdk_bdev_lvol.so.6.0
00:03:30.459 SO libspdk_bdev_virtio.so.6.0
00:03:30.459 SYMLINK libspdk_bdev_lvol.so
00:03:30.459 SYMLINK libspdk_bdev_virtio.so
00:03:30.721 LIB libspdk_bdev_raid.a
00:03:30.721 SO libspdk_bdev_raid.so.6.0
00:03:30.981 SYMLINK libspdk_bdev_raid.so
00:03:32.369 LIB libspdk_bdev_nvme.a
00:03:32.369 SO libspdk_bdev_nvme.so.7.1
00:03:32.369 SYMLINK libspdk_bdev_nvme.so
00:03:32.943 CC module/event/subsystems/scheduler/scheduler.o
00:03:32.943 CC module/event/subsystems/vmd/vmd.o
00:03:32.943 CC module/event/subsystems/sock/sock.o
00:03:32.943 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:32.943 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:32.943 CC module/event/subsystems/iobuf/iobuf.o
00:03:32.943 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:32.943 CC module/event/subsystems/keyring/keyring.o
00:03:32.943 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:32.943 CC module/event/subsystems/fsdev/fsdev.o
00:03:33.206 LIB libspdk_event_keyring.a
00:03:33.206 LIB libspdk_event_scheduler.a
00:03:33.206 LIB libspdk_event_sock.a
00:03:33.206 LIB libspdk_event_vmd.a
00:03:33.206 LIB libspdk_event_fsdev.a
00:03:33.206 LIB libspdk_event_vhost_blk.a
00:03:33.206 LIB libspdk_event_vfu_tgt.a
00:03:33.206 SO libspdk_event_scheduler.so.4.0
00:03:33.206 LIB libspdk_event_iobuf.a
00:03:33.206 SO libspdk_event_keyring.so.1.0
00:03:33.206 SO libspdk_event_sock.so.5.0
00:03:33.206 SO libspdk_event_vhost_blk.so.3.0
00:03:33.206 SO libspdk_event_vmd.so.6.0
00:03:33.206 SO libspdk_event_fsdev.so.1.0
00:03:33.206 SO libspdk_event_vfu_tgt.so.3.0
00:03:33.206 SO libspdk_event_iobuf.so.3.0
00:03:33.206 SYMLINK libspdk_event_scheduler.so
00:03:33.206 SYMLINK libspdk_event_keyring.so
00:03:33.206 SYMLINK libspdk_event_sock.so
00:03:33.206 SYMLINK libspdk_event_vhost_blk.so
00:03:33.206 SYMLINK libspdk_event_fsdev.so
00:03:33.206 SYMLINK libspdk_event_vmd.so
00:03:33.206 SYMLINK libspdk_event_vfu_tgt.so
00:03:33.206 SYMLINK libspdk_event_iobuf.so
00:03:33.776 CC module/event/subsystems/accel/accel.o
00:03:33.776 LIB libspdk_event_accel.a
00:03:33.776 SO libspdk_event_accel.so.6.0
00:03:33.776 SYMLINK libspdk_event_accel.so
00:03:34.348 CC module/event/subsystems/bdev/bdev.o
00:03:34.348 LIB libspdk_event_bdev.a
00:03:34.348 SO libspdk_event_bdev.so.6.0
00:03:34.348 SYMLINK libspdk_event_bdev.so
00:03:34.922 CC module/event/subsystems/scsi/scsi.o
00:03:34.922 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:34.922 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:34.922 CC module/event/subsystems/ublk/ublk.o
00:03:34.922 CC module/event/subsystems/nbd/nbd.o
00:03:34.922 LIB libspdk_event_ublk.a
00:03:34.922 LIB libspdk_event_nbd.a
00:03:34.922 LIB libspdk_event_scsi.a
00:03:34.922 SO libspdk_event_ublk.so.3.0
00:03:34.922 SO libspdk_event_nbd.so.6.0
00:03:34.922 SO libspdk_event_scsi.so.6.0
00:03:35.184 LIB libspdk_event_nvmf.a
00:03:35.184 SYMLINK libspdk_event_ublk.so
00:03:35.184 SYMLINK libspdk_event_nbd.so
00:03:35.184 SYMLINK libspdk_event_scsi.so
00:03:35.184 SO libspdk_event_nvmf.so.6.0
00:03:35.184 SYMLINK libspdk_event_nvmf.so
00:03:35.445 CC module/event/subsystems/iscsi/iscsi.o
00:03:35.445 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:35.706 LIB libspdk_event_vhost_scsi.a
00:03:35.706 LIB libspdk_event_iscsi.a
00:03:35.706 SO libspdk_event_vhost_scsi.so.3.0
00:03:35.706 SO libspdk_event_iscsi.so.6.0
00:03:35.706 SYMLINK libspdk_event_vhost_scsi.so
00:03:35.706 SYMLINK libspdk_event_iscsi.so
00:03:35.967 SO libspdk.so.6.0
00:03:35.967 SYMLINK libspdk.so
00:03:36.228 CC app/trace_record/trace_record.o
00:03:36.228 CXX app/trace/trace.o
00:03:36.228 CC app/spdk_nvme_perf/perf.o
00:03:36.228 CC app/spdk_nvme_identify/identify.o
00:03:36.228 CC test/rpc_client/rpc_client_test.o
00:03:36.228 CC app/spdk_top/spdk_top.o
00:03:36.228 TEST_HEADER include/spdk/accel.h
00:03:36.228 TEST_HEADER include/spdk/accel_module.h
00:03:36.228 TEST_HEADER include/spdk/assert.h
00:03:36.228 TEST_HEADER include/spdk/barrier.h
00:03:36.228 CC app/spdk_lspci/spdk_lspci.o
00:03:36.228 CC app/spdk_nvme_discover/discovery_aer.o
00:03:36.228 TEST_HEADER include/spdk/base64.h
00:03:36.228 TEST_HEADER include/spdk/bdev.h
00:03:36.228 TEST_HEADER include/spdk/bdev_module.h
00:03:36.228 TEST_HEADER include/spdk/bdev_zone.h
00:03:36.228 TEST_HEADER include/spdk/bit_array.h
00:03:36.228 TEST_HEADER include/spdk/bit_pool.h
00:03:36.228 TEST_HEADER include/spdk/blob_bdev.h
00:03:36.228 TEST_HEADER include/spdk/blob.h
00:03:36.228 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:36.228 TEST_HEADER include/spdk/blobfs.h
00:03:36.228 TEST_HEADER include/spdk/conf.h
00:03:36.228 TEST_HEADER include/spdk/config.h
00:03:36.228 TEST_HEADER include/spdk/cpuset.h
00:03:36.228 TEST_HEADER include/spdk/crc16.h
00:03:36.228 TEST_HEADER include/spdk/crc32.h
00:03:36.228 TEST_HEADER include/spdk/crc64.h
00:03:36.228 TEST_HEADER include/spdk/dif.h
00:03:36.228 TEST_HEADER include/spdk/dma.h
00:03:36.228 TEST_HEADER include/spdk/endian.h
00:03:36.228 TEST_HEADER include/spdk/env_dpdk.h
00:03:36.228 TEST_HEADER include/spdk/env.h
00:03:36.492 TEST_HEADER include/spdk/event.h
00:03:36.492 TEST_HEADER include/spdk/fd_group.h
00:03:36.492 TEST_HEADER include/spdk/file.h
00:03:36.492 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:36.492 TEST_HEADER include/spdk/fd.h
00:03:36.492 TEST_HEADER include/spdk/fsdev.h
00:03:36.492 CC app/spdk_dd/spdk_dd.o
00:03:36.492 TEST_HEADER include/spdk/ftl.h
00:03:36.492 TEST_HEADER include/spdk/fsdev_module.h
00:03:36.493 CC app/iscsi_tgt/iscsi_tgt.o
00:03:36.493 TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:36.493 TEST_HEADER include/spdk/hexlify.h
00:03:36.493 TEST_HEADER include/spdk/gpt_spec.h
00:03:36.493 TEST_HEADER include/spdk/idxd.h
00:03:36.493 TEST_HEADER include/spdk/init.h
00:03:36.493 TEST_HEADER include/spdk/histogram_data.h
00:03:36.493 TEST_HEADER include/spdk/idxd_spec.h
00:03:36.493 TEST_HEADER include/spdk/ioat.h
00:03:36.493 CC app/nvmf_tgt/nvmf_main.o
00:03:36.493 TEST_HEADER include/spdk/ioat_spec.h
00:03:36.493 TEST_HEADER include/spdk/iscsi_spec.h
00:03:36.493 TEST_HEADER include/spdk/json.h
00:03:36.493 TEST_HEADER include/spdk/jsonrpc.h
00:03:36.493 TEST_HEADER include/spdk/keyring.h
00:03:36.493 TEST_HEADER include/spdk/keyring_module.h
00:03:36.493 TEST_HEADER include/spdk/likely.h
00:03:36.493 TEST_HEADER include/spdk/log.h
00:03:36.493 TEST_HEADER include/spdk/lvol.h
00:03:36.493 TEST_HEADER include/spdk/md5.h
00:03:36.493 TEST_HEADER include/spdk/memory.h
00:03:36.493 TEST_HEADER include/spdk/mmio.h
00:03:36.493 TEST_HEADER include/spdk/nbd.h
00:03:36.493 TEST_HEADER include/spdk/net.h
00:03:36.493 TEST_HEADER include/spdk/nvme.h
00:03:36.493 TEST_HEADER include/spdk/notify.h
00:03:36.493 TEST_HEADER include/spdk/nvme_intel.h
00:03:36.493 TEST_HEADER include/spdk/nvme_spec.h
00:03:36.493 CC app/spdk_tgt/spdk_tgt.o
00:03:36.493 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:36.493 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:36.493 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:36.493 TEST_HEADER include/spdk/nvme_zns.h
00:03:36.493 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:36.493 TEST_HEADER include/spdk/nvmf.h
00:03:36.493 TEST_HEADER include/spdk/nvmf_spec.h
00:03:36.493 TEST_HEADER include/spdk/nvmf_transport.h
00:03:36.493 TEST_HEADER include/spdk/opal.h
00:03:36.493 TEST_HEADER include/spdk/opal_spec.h
00:03:36.493 TEST_HEADER include/spdk/pci_ids.h
00:03:36.493 TEST_HEADER include/spdk/pipe.h
00:03:36.493 TEST_HEADER include/spdk/queue.h
00:03:36.493 TEST_HEADER include/spdk/reduce.h
00:03:36.493 TEST_HEADER include/spdk/rpc.h
00:03:36.493 TEST_HEADER include/spdk/scheduler.h
00:03:36.493 TEST_HEADER include/spdk/scsi_spec.h
00:03:36.493 TEST_HEADER include/spdk/scsi.h
00:03:36.493 TEST_HEADER include/spdk/stdinc.h
00:03:36.493 TEST_HEADER include/spdk/string.h
00:03:36.493 TEST_HEADER include/spdk/sock.h
00:03:36.493 TEST_HEADER include/spdk/trace.h
00:03:36.493 TEST_HEADER include/spdk/thread.h
00:03:36.493 TEST_HEADER include/spdk/trace_parser.h
00:03:36.493 TEST_HEADER include/spdk/tree.h
00:03:36.493 TEST_HEADER include/spdk/util.h
00:03:36.493 TEST_HEADER include/spdk/ublk.h
00:03:36.493 TEST_HEADER include/spdk/uuid.h
00:03:36.493 TEST_HEADER include/spdk/version.h
00:03:36.493 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:36.493 TEST_HEADER
include/spdk/vhost.h 00:03:36.493 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:36.493 TEST_HEADER include/spdk/vmd.h 00:03:36.493 TEST_HEADER include/spdk/xor.h 00:03:36.493 TEST_HEADER include/spdk/zipf.h 00:03:36.493 CXX test/cpp_headers/accel_module.o 00:03:36.493 CXX test/cpp_headers/accel.o 00:03:36.493 CXX test/cpp_headers/assert.o 00:03:36.493 CXX test/cpp_headers/barrier.o 00:03:36.493 CXX test/cpp_headers/base64.o 00:03:36.493 CXX test/cpp_headers/bdev.o 00:03:36.493 CXX test/cpp_headers/bdev_zone.o 00:03:36.493 CXX test/cpp_headers/bdev_module.o 00:03:36.493 CXX test/cpp_headers/bit_array.o 00:03:36.493 CXX test/cpp_headers/bit_pool.o 00:03:36.493 CXX test/cpp_headers/blob_bdev.o 00:03:36.493 CXX test/cpp_headers/blobfs_bdev.o 00:03:36.493 CXX test/cpp_headers/blobfs.o 00:03:36.493 CXX test/cpp_headers/blob.o 00:03:36.493 CXX test/cpp_headers/conf.o 00:03:36.493 CXX test/cpp_headers/crc32.o 00:03:36.493 CXX test/cpp_headers/config.o 00:03:36.493 CXX test/cpp_headers/cpuset.o 00:03:36.493 CXX test/cpp_headers/crc16.o 00:03:36.493 CXX test/cpp_headers/crc64.o 00:03:36.493 CXX test/cpp_headers/dif.o 00:03:36.493 CXX test/cpp_headers/dma.o 00:03:36.493 CXX test/cpp_headers/endian.o 00:03:36.493 CXX test/cpp_headers/event.o 00:03:36.493 CXX test/cpp_headers/env_dpdk.o 00:03:36.493 CXX test/cpp_headers/env.o 00:03:36.493 CXX test/cpp_headers/fd_group.o 00:03:36.493 CXX test/cpp_headers/fd.o 00:03:36.493 CXX test/cpp_headers/file.o 00:03:36.493 CXX test/cpp_headers/fsdev.o 00:03:36.493 CXX test/cpp_headers/fsdev_module.o 00:03:36.493 CXX test/cpp_headers/gpt_spec.o 00:03:36.493 CXX test/cpp_headers/fuse_dispatcher.o 00:03:36.493 CXX test/cpp_headers/ftl.o 00:03:36.493 CXX test/cpp_headers/histogram_data.o 00:03:36.493 CXX test/cpp_headers/hexlify.o 00:03:36.493 CXX test/cpp_headers/idxd_spec.o 00:03:36.493 CXX test/cpp_headers/ioat.o 00:03:36.493 CXX test/cpp_headers/idxd.o 00:03:36.493 CXX test/cpp_headers/init.o 00:03:36.493 CXX test/cpp_headers/ioat_spec.o 
00:03:36.493 CXX test/cpp_headers/iscsi_spec.o 00:03:36.493 CXX test/cpp_headers/json.o 00:03:36.493 CXX test/cpp_headers/jsonrpc.o 00:03:36.493 CXX test/cpp_headers/keyring_module.o 00:03:36.493 CXX test/cpp_headers/keyring.o 00:03:36.493 CXX test/cpp_headers/md5.o 00:03:36.493 CXX test/cpp_headers/log.o 00:03:36.493 CXX test/cpp_headers/lvol.o 00:03:36.493 CXX test/cpp_headers/likely.o 00:03:36.493 CXX test/cpp_headers/memory.o 00:03:36.493 CXX test/cpp_headers/mmio.o 00:03:36.493 CXX test/cpp_headers/nbd.o 00:03:36.493 CXX test/cpp_headers/nvme.o 00:03:36.493 CXX test/cpp_headers/net.o 00:03:36.493 CXX test/cpp_headers/notify.o 00:03:36.493 CXX test/cpp_headers/nvme_intel.o 00:03:36.493 CC examples/util/zipf/zipf.o 00:03:36.493 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.493 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.493 CXX test/cpp_headers/nvme_zns.o 00:03:36.493 CXX test/cpp_headers/nvme_spec.o 00:03:36.493 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.493 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.493 CXX test/cpp_headers/nvmf.o 00:03:36.493 CXX test/cpp_headers/nvmf_transport.o 00:03:36.493 CXX test/cpp_headers/nvmf_spec.o 00:03:36.493 CXX test/cpp_headers/opal.o 00:03:36.493 CXX test/cpp_headers/opal_spec.o 00:03:36.493 CXX test/cpp_headers/pci_ids.o 00:03:36.493 CXX test/cpp_headers/queue.o 00:03:36.493 CXX test/cpp_headers/pipe.o 00:03:36.493 CXX test/cpp_headers/rpc.o 00:03:36.493 CC examples/ioat/perf/perf.o 00:03:36.493 CC examples/ioat/verify/verify.o 00:03:36.493 CXX test/cpp_headers/reduce.o 00:03:36.493 CXX test/cpp_headers/scsi.o 00:03:36.493 CC test/thread/poller_perf/poller_perf.o 00:03:36.493 CXX test/cpp_headers/scheduler.o 00:03:36.493 CXX test/cpp_headers/scsi_spec.o 00:03:36.493 CXX test/cpp_headers/stdinc.o 00:03:36.493 LINK spdk_lspci 00:03:36.493 CXX test/cpp_headers/sock.o 00:03:36.493 CXX test/cpp_headers/string.o 00:03:36.493 CC test/app/stub/stub.o 00:03:36.493 CXX test/cpp_headers/thread.o 00:03:36.493 CC app/fio/nvme/fio_plugin.o 
00:03:36.493 CXX test/cpp_headers/trace.o 00:03:36.493 CXX test/cpp_headers/tree.o 00:03:36.493 CXX test/cpp_headers/ublk.o 00:03:36.493 CXX test/cpp_headers/trace_parser.o 00:03:36.493 CC test/env/vtophys/vtophys.o 00:03:36.493 CXX test/cpp_headers/version.o 00:03:36.493 CXX test/cpp_headers/util.o 00:03:36.493 CXX test/cpp_headers/uuid.o 00:03:36.493 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.493 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.493 CXX test/cpp_headers/vhost.o 00:03:36.493 CXX test/cpp_headers/zipf.o 00:03:36.493 CXX test/cpp_headers/vmd.o 00:03:36.493 CXX test/cpp_headers/xor.o 00:03:36.493 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.493 CC test/env/memory/memory_ut.o 00:03:36.493 CC test/app/histogram_perf/histogram_perf.o 00:03:36.761 CC test/env/pci/pci_ut.o 00:03:36.761 CC app/fio/bdev/fio_plugin.o 00:03:36.761 CC test/app/jsoncat/jsoncat.o 00:03:36.761 LINK rpc_client_test 00:03:36.761 CC test/app/bdev_svc/bdev_svc.o 00:03:36.761 CC test/dma/test_dma/test_dma.o 00:03:36.761 LINK spdk_nvme_discover 00:03:36.761 LINK interrupt_tgt 00:03:36.761 LINK iscsi_tgt 00:03:36.761 LINK nvmf_tgt 00:03:36.761 LINK spdk_trace_record 00:03:37.021 CC test/env/mem_callbacks/mem_callbacks.o 00:03:37.021 LINK spdk_tgt 00:03:37.021 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:37.021 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:37.021 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:37.021 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:37.021 LINK spdk_dd 00:03:37.280 LINK spdk_trace 00:03:37.280 LINK poller_perf 00:03:37.280 LINK vtophys 00:03:37.280 LINK verify 00:03:37.280 LINK zipf 00:03:37.280 LINK histogram_perf 00:03:37.280 LINK env_dpdk_post_init 00:03:37.280 LINK jsoncat 00:03:37.280 LINK stub 00:03:37.280 LINK bdev_svc 00:03:37.280 LINK ioat_perf 00:03:37.539 LINK spdk_nvme_perf 00:03:37.539 LINK spdk_bdev 00:03:37.539 LINK spdk_top 00:03:37.539 CC app/vhost/vhost.o 00:03:37.539 LINK pci_ut 00:03:37.539 LINK spdk_nvme_identify 
00:03:37.539 LINK spdk_nvme 00:03:37.539 LINK vhost_fuzz 00:03:37.539 LINK nvme_fuzz 00:03:37.539 LINK test_dma 00:03:37.539 CC test/event/reactor_perf/reactor_perf.o 00:03:37.539 CC test/event/event_perf/event_perf.o 00:03:37.539 CC test/event/reactor/reactor.o 00:03:37.800 CC test/event/app_repeat/app_repeat.o 00:03:37.800 CC examples/vmd/led/led.o 00:03:37.800 LINK mem_callbacks 00:03:37.800 CC examples/idxd/perf/perf.o 00:03:37.800 CC examples/vmd/lsvmd/lsvmd.o 00:03:37.800 CC test/event/scheduler/scheduler.o 00:03:37.800 CC examples/sock/hello_world/hello_sock.o 00:03:37.800 CC examples/thread/thread/thread_ex.o 00:03:37.800 LINK vhost 00:03:37.800 LINK reactor 00:03:37.800 LINK reactor_perf 00:03:37.800 LINK event_perf 00:03:37.800 LINK app_repeat 00:03:37.800 LINK led 00:03:37.800 LINK lsvmd 00:03:38.060 LINK hello_sock 00:03:38.060 LINK scheduler 00:03:38.060 LINK thread 00:03:38.060 LINK idxd_perf 00:03:38.321 LINK memory_ut 00:03:38.321 CC test/nvme/cuse/cuse.o 00:03:38.321 CC test/nvme/aer/aer.o 00:03:38.321 CC test/nvme/overhead/overhead.o 00:03:38.321 CC test/nvme/sgl/sgl.o 00:03:38.321 CC test/nvme/err_injection/err_injection.o 00:03:38.321 CC test/blobfs/mkfs/mkfs.o 00:03:38.321 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:38.321 CC test/nvme/compliance/nvme_compliance.o 00:03:38.321 CC test/nvme/connect_stress/connect_stress.o 00:03:38.321 CC test/accel/dif/dif.o 00:03:38.321 CC test/nvme/fdp/fdp.o 00:03:38.322 CC test/nvme/reserve/reserve.o 00:03:38.322 CC test/nvme/reset/reset.o 00:03:38.322 CC test/nvme/simple_copy/simple_copy.o 00:03:38.322 CC test/nvme/startup/startup.o 00:03:38.322 CC test/nvme/e2edp/nvme_dp.o 00:03:38.322 CC test/nvme/fused_ordering/fused_ordering.o 00:03:38.322 CC test/nvme/boot_partition/boot_partition.o 00:03:38.322 CC test/lvol/esnap/esnap.o 00:03:38.582 LINK boot_partition 00:03:38.582 LINK doorbell_aers 00:03:38.582 LINK fused_ordering 00:03:38.582 CC examples/nvme/hello_world/hello_world.o 00:03:38.582 LINK 
err_injection 00:03:38.582 LINK startup 00:03:38.582 CC examples/nvme/abort/abort.o 00:03:38.582 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:38.582 CC examples/nvme/reconnect/reconnect.o 00:03:38.582 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.582 LINK connect_stress 00:03:38.582 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.582 CC examples/nvme/arbitration/arbitration.o 00:03:38.582 CC examples/nvme/hotplug/hotplug.o 00:03:38.582 LINK reserve 00:03:38.582 LINK mkfs 00:03:38.582 LINK overhead 00:03:38.582 LINK simple_copy 00:03:38.582 LINK sgl 00:03:38.582 LINK aer 00:03:38.582 LINK reset 00:03:38.582 LINK nvme_dp 00:03:38.582 LINK fdp 00:03:38.582 CC examples/accel/perf/accel_perf.o 00:03:38.582 LINK nvme_compliance 00:03:38.582 CC examples/blob/cli/blobcli.o 00:03:38.582 LINK iscsi_fuzz 00:03:38.582 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:38.582 CC examples/blob/hello_world/hello_blob.o 00:03:38.582 LINK cmb_copy 00:03:38.843 LINK pmr_persistence 00:03:38.843 LINK hello_world 00:03:38.843 LINK hotplug 00:03:38.843 LINK reconnect 00:03:38.843 LINK arbitration 00:03:38.843 LINK abort 00:03:38.843 LINK dif 00:03:38.843 LINK hello_blob 00:03:38.843 LINK hello_fsdev 00:03:38.843 LINK nvme_manage 00:03:39.104 LINK accel_perf 00:03:39.104 LINK blobcli 00:03:39.367 LINK cuse 00:03:39.367 CC test/bdev/bdevio/bdevio.o 00:03:39.628 CC examples/bdev/hello_world/hello_bdev.o 00:03:39.628 CC examples/bdev/bdevperf/bdevperf.o 00:03:39.890 LINK bdevio 00:03:39.890 LINK hello_bdev 00:03:40.464 LINK bdevperf 00:03:41.037 CC examples/nvmf/nvmf/nvmf.o 00:03:41.299 LINK nvmf 00:03:42.686 LINK esnap 00:03:42.686 00:03:42.686 real 0m54.712s 00:03:42.686 user 7m47.014s 00:03:42.686 sys 4m24.453s 00:03:42.686 10:57:50 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:42.686 10:57:50 make -- common/autotest_common.sh@10 -- $ set +x 00:03:42.686 ************************************ 00:03:42.686 END TEST make 00:03:42.686 
************************************ 00:03:42.686 10:57:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:42.686 10:57:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:42.686 10:57:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:42.686 10:57:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.686 10:57:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:42.686 10:57:50 -- pm/common@44 -- $ pid=3791150 00:03:42.686 10:57:50 -- pm/common@50 -- $ kill -TERM 3791150 00:03:42.686 10:57:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.686 10:57:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:42.686 10:57:50 -- pm/common@44 -- $ pid=3791151 00:03:42.686 10:57:50 -- pm/common@50 -- $ kill -TERM 3791151 00:03:42.686 10:57:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.686 10:57:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:42.686 10:57:50 -- pm/common@44 -- $ pid=3791153 00:03:42.686 10:57:50 -- pm/common@50 -- $ kill -TERM 3791153 00:03:42.686 10:57:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.686 10:57:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:42.686 10:57:50 -- pm/common@44 -- $ pid=3791180 00:03:42.686 10:57:50 -- pm/common@50 -- $ sudo -E kill -TERM 3791180 00:03:42.686 10:57:50 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:42.686 10:57:50 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:42.949 10:57:51 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 
00:03:42.949 10:57:51 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:42.949 10:57:51 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:42.949 10:57:51 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:42.949 10:57:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.949 10:57:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.949 10:57:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.949 10:57:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.949 10:57:51 -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.949 10:57:51 -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.949 10:57:51 -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.949 10:57:51 -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.949 10:57:51 -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.949 10:57:51 -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.949 10:57:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.949 10:57:51 -- scripts/common.sh@344 -- # case "$op" in 00:03:42.949 10:57:51 -- scripts/common.sh@345 -- # : 1 00:03:42.949 10:57:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.949 10:57:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.949 10:57:51 -- scripts/common.sh@365 -- # decimal 1 00:03:42.949 10:57:51 -- scripts/common.sh@353 -- # local d=1 00:03:42.949 10:57:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.949 10:57:51 -- scripts/common.sh@355 -- # echo 1 00:03:42.949 10:57:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.949 10:57:51 -- scripts/common.sh@366 -- # decimal 2 00:03:42.949 10:57:51 -- scripts/common.sh@353 -- # local d=2 00:03:42.949 10:57:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.949 10:57:51 -- scripts/common.sh@355 -- # echo 2 00:03:42.949 10:57:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.949 10:57:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.949 10:57:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.949 10:57:51 -- scripts/common.sh@368 -- # return 0 00:03:42.949 10:57:51 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.949 10:57:51 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.949 --rc genhtml_branch_coverage=1 00:03:42.949 --rc genhtml_function_coverage=1 00:03:42.949 --rc genhtml_legend=1 00:03:42.949 --rc geninfo_all_blocks=1 00:03:42.949 --rc geninfo_unexecuted_blocks=1 00:03:42.949 00:03:42.949 ' 00:03:42.949 10:57:51 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.949 --rc genhtml_branch_coverage=1 00:03:42.949 --rc genhtml_function_coverage=1 00:03:42.949 --rc genhtml_legend=1 00:03:42.949 --rc geninfo_all_blocks=1 00:03:42.949 --rc geninfo_unexecuted_blocks=1 00:03:42.949 00:03:42.949 ' 00:03:42.949 10:57:51 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.949 --rc genhtml_branch_coverage=1 00:03:42.949 --rc 
genhtml_function_coverage=1 00:03:42.949 --rc genhtml_legend=1 00:03:42.949 --rc geninfo_all_blocks=1 00:03:42.949 --rc geninfo_unexecuted_blocks=1 00:03:42.949 00:03:42.949 ' 00:03:42.949 10:57:51 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.949 --rc genhtml_branch_coverage=1 00:03:42.949 --rc genhtml_function_coverage=1 00:03:42.949 --rc genhtml_legend=1 00:03:42.949 --rc geninfo_all_blocks=1 00:03:42.949 --rc geninfo_unexecuted_blocks=1 00:03:42.949 00:03:42.949 ' 00:03:42.949 10:57:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:42.949 10:57:51 -- nvmf/common.sh@7 -- # uname -s 00:03:42.949 10:57:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.949 10:57:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.949 10:57:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.949 10:57:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.949 10:57:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.949 10:57:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.949 10:57:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.949 10:57:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.949 10:57:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.949 10:57:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.949 10:57:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:42.949 10:57:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:42.949 10:57:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.949 10:57:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.950 10:57:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:42.950 10:57:51 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:42.950 10:57:51 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:42.950 10:57:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:42.950 10:57:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.950 10:57:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.950 10:57:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.950 10:57:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.950 10:57:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.950 10:57:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.950 10:57:51 -- paths/export.sh@5 -- # export PATH 00:03:42.950 10:57:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.950 10:57:51 -- nvmf/common.sh@51 -- # : 0 00:03:42.950 10:57:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:42.950 10:57:51 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:42.950 10:57:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:42.950 10:57:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.950 10:57:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.950 10:57:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:42.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:42.950 10:57:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:42.950 10:57:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:42.950 10:57:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:42.950 10:57:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:42.950 10:57:51 -- spdk/autotest.sh@32 -- # uname -s 00:03:42.950 10:57:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:42.950 10:57:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:42.950 10:57:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:42.950 10:57:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:42.950 10:57:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:42.950 10:57:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:42.950 10:57:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:42.950 10:57:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:42.950 10:57:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:42.950 10:57:51 -- spdk/autotest.sh@48 -- # udevadm_pid=3856965 00:03:42.950 10:57:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:42.950 10:57:51 -- pm/common@17 -- # local monitor 00:03:42.950 10:57:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.950 10:57:51 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:42.950 10:57:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.950 10:57:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.950 10:57:51 -- pm/common@21 -- # date +%s 00:03:42.950 10:57:51 -- pm/common@21 -- # date +%s 00:03:42.950 10:57:51 -- pm/common@25 -- # sleep 1 00:03:42.950 10:57:51 -- pm/common@21 -- # date +%s 00:03:42.950 10:57:51 -- pm/common@21 -- # date +%s 00:03:42.950 10:57:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732010271 00:03:42.950 10:57:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732010271 00:03:42.950 10:57:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732010271 00:03:42.950 10:57:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732010271 00:03:42.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732010271_collect-vmstat.pm.log 00:03:42.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732010271_collect-cpu-load.pm.log 00:03:42.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732010271_collect-cpu-temp.pm.log 00:03:43.212 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732010271_collect-bmc-pm.bmc.pm.log 00:03:44.153 
10:57:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.153 10:57:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.153 10:57:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.153 10:57:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.153 10:57:52 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.153 10:57:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:44.153 10:57:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.153 10:57:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:44.153 10:57:52 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.153 10:57:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.153 10:57:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:44.153 10:57:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.153 10:57:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.153 10:57:52 -- common/autotest_common.sh@1457 -- # uname 00:03:44.153 10:57:52 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:44.153 10:57:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.153 10:57:52 -- common/autotest_common.sh@1477 -- # uname 00:03:44.153 10:57:52 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:44.153 10:57:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:44.153 10:57:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:44.153 lcov: LCOV version 1.15 00:03:44.153 10:57:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:59.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:59.067 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:04:14.075 10:58:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:04:14.075 10:58:22 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:14.075 10:58:22 -- common/autotest_common.sh@10 -- # set +x
00:04:14.075 10:58:22 -- spdk/autotest.sh@78 -- # rm -f
00:04:14.075 10:58:22 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:18.279 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:65:00.0 (144d a80a): Already using the nvme driver
00:04:18.279 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:04:18.279 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:04:18.540 10:58:26 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:04:18.540 10:58:26 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:04:18.540 10:58:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:04:18.540 10:58:26 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:04:18.540 10:58:26 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:18.540 10:58:26 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:04:18.540 10:58:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:04:18.540 10:58:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:18.540 10:58:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:18.540 10:58:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:04:18.540 10:58:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:18.540 10:58:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:18.540 10:58:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:04:18.540 10:58:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:04:18.540 10:58:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:18.540 No valid GPT data, bailing
00:04:18.540 10:58:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:18.540 10:58:26 -- scripts/common.sh@394 -- # pt=
00:04:18.540 10:58:26 -- scripts/common.sh@395 -- # return 1
00:04:18.540 10:58:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:18.540 1+0 records in
00:04:18.540 1+0 records out
00:04:18.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045947 s, 228 MB/s
00:04:18.540 10:58:26 -- spdk/autotest.sh@105 -- # sync
00:04:18.540 10:58:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:18.540 10:58:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:18.540 10:58:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:26.675 10:58:34 -- spdk/autotest.sh@111 -- # uname -s
00:04:26.675 10:58:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:26.675 10:58:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:26.675 10:58:34 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:30.880 Hugepages
00:04:30.880 node hugesize free / total
00:04:30.880 node0 1048576kB 0 / 0
00:04:30.880 node0 2048kB 0 / 0
00:04:30.880 node1 1048576kB 0 / 0
00:04:30.880 node1 2048kB 0 / 0
00:04:30.880
00:04:30.880 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:30.880 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:04:30.880 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:04:30.880 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:04:30.880 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:04:30.880 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:04:30.880 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:04:30.880 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:04:30.880 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:04:30.880 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:04:30.880 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:04:30.880 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:04:30.880 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:04:30.880 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:04:30.880 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:04:30.880 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:04:30.880 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:04:30.880 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:04:30.880 10:58:38 -- spdk/autotest.sh@117 -- # uname -s
00:04:30.880 10:58:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:30.880 10:58:38 -- spdk/autotest.sh@119 -- #
nvme_namespace_revert
00:04:30.880 10:58:38 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:35.082 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:35.082 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:36.463 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:04:37.034 10:58:45 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:37.977 10:58:46 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:37.977 10:58:46 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:37.977 10:58:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:37.977 10:58:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:37.977 10:58:46 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:37.977 10:58:46 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:37.977 10:58:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:37.977 10:58:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:37.977 10:58:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:37.977 10:58:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:37.977 10:58:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:04:37.977 10:58:46 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:42.187 Waiting for block devices as requested
00:04:42.187 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:04:42.187 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:04:42.187 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:04:42.187 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:04:42.187 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:04:42.187 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:04:42.187 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:04:42.448 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:04:42.448 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:04:42.709 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:04:42.709 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:04:42.709 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:04:42.709 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:04:42.970 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:04:42.970 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:04:42.970 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:04:43.231 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:04:43.492 10:58:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:43.492 10:58:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0
00:04:43.492 10:58:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:04:43.492 10:58:51 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme
00:04:43.492 10:58:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:04:43.492 10:58:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]]
00:04:43.492 10:58:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:04:43.492 10:58:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:43.492 10:58:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:43.492 10:58:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:43.492 10:58:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:43.492 10:58:51 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:43.492 10:58:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:43.492 10:58:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f'
00:04:43.492 10:58:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:43.492 10:58:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:43.492 10:58:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:43.492 10:58:51 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:43.492 10:58:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:43.492 10:58:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:43.492 10:58:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:43.492 10:58:51 -- common/autotest_common.sh@1543 -- # continue
00:04:43.492 10:58:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:43.492 10:58:51 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:43.492 10:58:51 -- common/autotest_common.sh@10 -- # set +x
00:04:43.492 10:58:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:43.492 10:58:51 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:43.492 10:58:51 -- common/autotest_common.sh@10 -- # set +x
00:04:43.492 10:58:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:47.704 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:47.704 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:04:47.965 10:58:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:47.965 10:58:56 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:47.965 10:58:56 -- common/autotest_common.sh@10 -- # set +x
00:04:47.965 10:58:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:47.965 10:58:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:47.965 10:58:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:47.965 10:58:56 -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:47.965 10:58:56 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:47.965 10:58:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:47.965 10:58:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:47.965 10:58:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:47.965 10:58:56 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:47.965 10:58:56 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:47.965 10:58:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:47.965 10:58:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:47.965 10:58:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:47.965 10:58:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:47.965 10:58:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:04:47.965 10:58:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:47.965 10:58:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device
00:04:47.965 10:58:56 -- common/autotest_common.sh@1566 -- # device=0xa80a
00:04:47.965 10:58:56 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]]
00:04:47.965 10:58:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:04:47.965 10:58:56 -- common/autotest_common.sh@1572 -- # return 0
00:04:47.965 10:58:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:04:47.965 10:58:56 -- common/autotest_common.sh@1580 -- # return 0
00:04:47.965 10:58:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:47.965 10:58:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:47.965 10:58:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:47.965 10:58:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:47.965 10:58:56 -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:47.965 10:58:56 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:47.965 10:58:56 -- common/autotest_common.sh@10 -- # set +x
00:04:47.965 10:58:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:47.965 10:58:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:47.965 10:58:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:47.965 10:58:56 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:47.965 10:58:56 -- common/autotest_common.sh@10 -- # set +x
00:04:47.965 ************************************
00:04:47.965 START TEST env
00:04:47.965 ************************************
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:48.227 * Looking for test storage...
00:04:48.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1693 -- # lcov --version
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:48.227 10:58:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:48.227 10:58:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:48.227 10:58:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:48.227 10:58:56 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:48.227 10:58:56 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:48.227 10:58:56 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:48.227 10:58:56 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:48.227 10:58:56 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:48.227 10:58:56 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:48.227 10:58:56 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:48.227 10:58:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:48.227 10:58:56 env -- scripts/common.sh@344 -- # case "$op" in
00:04:48.227 10:58:56 env -- scripts/common.sh@345 -- # : 1
00:04:48.227 10:58:56 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:48.227 10:58:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:48.227 10:58:56 env -- scripts/common.sh@365 -- # decimal 1
00:04:48.227 10:58:56 env -- scripts/common.sh@353 -- # local d=1
00:04:48.227 10:58:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:48.227 10:58:56 env -- scripts/common.sh@355 -- # echo 1
00:04:48.227 10:58:56 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:48.227 10:58:56 env -- scripts/common.sh@366 -- # decimal 2
00:04:48.227 10:58:56 env -- scripts/common.sh@353 -- # local d=2
00:04:48.227 10:58:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:48.227 10:58:56 env -- scripts/common.sh@355 -- # echo 2
00:04:48.227 10:58:56 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:48.227 10:58:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:48.227 10:58:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:48.227 10:58:56 env -- scripts/common.sh@368 -- # return 0
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:48.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.227 --rc genhtml_branch_coverage=1
00:04:48.227 --rc genhtml_function_coverage=1
00:04:48.227 --rc genhtml_legend=1
00:04:48.227 --rc geninfo_all_blocks=1
00:04:48.227 --rc geninfo_unexecuted_blocks=1
00:04:48.227
00:04:48.227 '
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:48.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.227 --rc genhtml_branch_coverage=1
00:04:48.227 --rc genhtml_function_coverage=1
00:04:48.227 --rc genhtml_legend=1
00:04:48.227 --rc geninfo_all_blocks=1
00:04:48.227 --rc geninfo_unexecuted_blocks=1
00:04:48.227
00:04:48.227 '
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:48.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.227 --rc genhtml_branch_coverage=1
00:04:48.227 --rc genhtml_function_coverage=1
00:04:48.227 --rc genhtml_legend=1
00:04:48.227 --rc geninfo_all_blocks=1
00:04:48.227 --rc geninfo_unexecuted_blocks=1
00:04:48.227
00:04:48.227 '
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:48.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.227 --rc genhtml_branch_coverage=1
00:04:48.227 --rc genhtml_function_coverage=1
00:04:48.227 --rc genhtml_legend=1
00:04:48.227 --rc geninfo_all_blocks=1
00:04:48.227 --rc geninfo_unexecuted_blocks=1
00:04:48.227
00:04:48.227 '
00:04:48.227 10:58:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:48.227 10:58:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:48.227 10:58:56 env -- common/autotest_common.sh@10 -- # set +x
00:04:48.227 ************************************
00:04:48.227 START TEST env_memory
00:04:48.227 ************************************
00:04:48.227 10:58:56 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:48.227
00:04:48.227
00:04:48.227 CUnit - A unit testing framework for C - Version 2.1-3
00:04:48.227 http://cunit.sourceforge.net/
00:04:48.227
00:04:48.227
00:04:48.227 Suite: memory
00:04:48.227 Test: alloc and free memory map ...[2024-11-19 10:58:56.526717] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:48.227 passed
00:04:48.227 Test: mem map translation ...[2024-11-19 10:58:56.552202] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 [2024-11-19 10:58:56.552230] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 [2024-11-19 10:58:56.552276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 [2024-11-19 10:58:56.552284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:48.490 passed
00:04:48.490 Test: mem map registration ...[2024-11-19 10:58:56.607503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 [2024-11-19 10:58:56.607532] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:48.490 passed
00:04:48.490 Test: mem map adjacent registrations ...passed
00:04:48.490
00:04:48.490 Run Summary: Type Total Ran Passed Failed Inactive
00:04:48.490 suites 1 1 n/a 0 0
00:04:48.490 tests 4 4 4 0 0
00:04:48.490 asserts 152 152 152 0 n/a
00:04:48.490
00:04:48.490 Elapsed time = 0.200 seconds
00:04:48.490
00:04:48.490 real 0m0.214s
00:04:48.490 user 0m0.204s
00:04:48.490 sys 0m0.009s
00:04:48.490 10:58:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:48.490 10:58:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:48.490 ************************************
00:04:48.490 END TEST env_memory
00:04:48.490 ************************************
00:04:48.490 10:58:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:48.490 10:58:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1
']'
00:04:48.490 10:58:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:48.490 10:58:56 env -- common/autotest_common.sh@10 -- # set +x
00:04:48.490 ************************************
00:04:48.490 START TEST env_vtophys
00:04:48.490 ************************************
00:04:48.490 10:58:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:48.490 EAL: lib.eal log level changed from notice to debug
00:04:48.490 EAL: Detected lcore 0 as core 0 on socket 0
00:04:48.490 EAL: Detected lcore 1 as core 1 on socket 0
00:04:48.490 EAL: Detected lcore 2 as core 2 on socket 0
00:04:48.490 EAL: Detected lcore 3 as core 3 on socket 0
00:04:48.490 EAL: Detected lcore 4 as core 4 on socket 0
00:04:48.490 EAL: Detected lcore 5 as core 5 on socket 0
00:04:48.490 EAL: Detected lcore 6 as core 6 on socket 0
00:04:48.490 EAL: Detected lcore 7 as core 7 on socket 0
00:04:48.490 EAL: Detected lcore 8 as core 8 on socket 0
00:04:48.490 EAL: Detected lcore 9 as core 9 on socket 0
00:04:48.490 EAL: Detected lcore 10 as core 10 on socket 0
00:04:48.490 EAL: Detected lcore 11 as core 11 on socket 0
00:04:48.490 EAL: Detected lcore 12 as core 12 on socket 0
00:04:48.490 EAL: Detected lcore 13 as core 13 on socket 0
00:04:48.490 EAL: Detected lcore 14 as core 14 on socket 0
00:04:48.490 EAL: Detected lcore 15 as core 15 on socket 0
00:04:48.490 EAL: Detected lcore 16 as core 16 on socket 0
00:04:48.490 EAL: Detected lcore 17 as core 17 on socket 0
00:04:48.490 EAL: Detected lcore 18 as core 18 on socket 0
00:04:48.490 EAL: Detected lcore 19 as core 19 on socket 0
00:04:48.490 EAL: Detected lcore 20 as core 20 on socket 0
00:04:48.490 EAL: Detected lcore 21 as core 21 on socket 0
00:04:48.490 EAL: Detected lcore 22 as core 22 on socket 0
00:04:48.490 EAL: Detected lcore 23 as core 23 on socket 0
00:04:48.490 EAL: Detected lcore 24 as core 24 on socket 0
00:04:48.490 EAL: Detected lcore 25 as core 25 on socket 0
00:04:48.490 EAL: Detected lcore 26 as core 26 on socket 0
00:04:48.490 EAL: Detected lcore 27 as core 27 on socket 0
00:04:48.490 EAL: Detected lcore 28 as core 28 on socket 0
00:04:48.490 EAL: Detected lcore 29 as core 29 on socket 0
00:04:48.490 EAL: Detected lcore 30 as core 30 on socket 0
00:04:48.490 EAL: Detected lcore 31 as core 31 on socket 0
00:04:48.490 EAL: Detected lcore 32 as core 32 on socket 0
00:04:48.490 EAL: Detected lcore 33 as core 33 on socket 0
00:04:48.490 EAL: Detected lcore 34 as core 34 on socket 0
00:04:48.490 EAL: Detected lcore 35 as core 35 on socket 0
00:04:48.490 EAL: Detected lcore 36 as core 0 on socket 1
00:04:48.490 EAL: Detected lcore 37 as core 1 on socket 1
00:04:48.490 EAL: Detected lcore 38 as core 2 on socket 1
00:04:48.490 EAL: Detected lcore 39 as core 3 on socket 1
00:04:48.490 EAL: Detected lcore 40 as core 4 on socket 1
00:04:48.490 EAL: Detected lcore 41 as core 5 on socket 1
00:04:48.490 EAL: Detected lcore 42 as core 6 on socket 1
00:04:48.490 EAL: Detected lcore 43 as core 7 on socket 1
00:04:48.490 EAL: Detected lcore 44 as core 8 on socket 1
00:04:48.490 EAL: Detected lcore 45 as core 9 on socket 1
00:04:48.490 EAL: Detected lcore 46 as core 10 on socket 1
00:04:48.490 EAL: Detected lcore 47 as core 11 on socket 1
00:04:48.490 EAL: Detected lcore 48 as core 12 on socket 1
00:04:48.490 EAL: Detected lcore 49 as core 13 on socket 1
00:04:48.490 EAL: Detected lcore 50 as core 14 on socket 1
00:04:48.490 EAL: Detected lcore 51 as core 15 on socket 1
00:04:48.490 EAL: Detected lcore 52 as core 16 on socket 1
00:04:48.490 EAL: Detected lcore 53 as core 17 on socket 1
00:04:48.490 EAL: Detected lcore 54 as core 18 on socket 1
00:04:48.490 EAL: Detected lcore 55 as core 19 on socket 1
00:04:48.490 EAL: Detected lcore 56 as core 20 on socket 1
00:04:48.490 EAL: Detected lcore 57 as core 21 on socket 1
00:04:48.490 EAL: Detected lcore 58 as core 22 on socket 1
00:04:48.490 EAL: Detected lcore 59 as core 23 on socket 1
00:04:48.490 EAL: Detected lcore 60 as core 24 on socket 1
00:04:48.490 EAL: Detected lcore 61 as core 25 on socket 1
00:04:48.490 EAL: Detected lcore 62 as core 26 on socket 1
00:04:48.490 EAL: Detected lcore 63 as core 27 on socket 1
00:04:48.490 EAL: Detected lcore 64 as core 28 on socket 1
00:04:48.490 EAL: Detected lcore 65 as core 29 on socket 1
00:04:48.490 EAL: Detected lcore 66 as core 30 on socket 1
00:04:48.490 EAL: Detected lcore 67 as core 31 on socket 1
00:04:48.490 EAL: Detected lcore 68 as core 32 on socket 1
00:04:48.490 EAL: Detected lcore 69 as core 33 on socket 1
00:04:48.490 EAL: Detected lcore 70 as core 34 on socket 1
00:04:48.490 EAL: Detected lcore 71 as core 35 on socket 1
00:04:48.490 EAL: Detected lcore 72 as core 0 on socket 0
00:04:48.490 EAL: Detected lcore 73 as core 1 on socket 0
00:04:48.490 EAL: Detected lcore 74 as core 2 on socket 0
00:04:48.490 EAL: Detected lcore 75 as core 3 on socket 0
00:04:48.490 EAL: Detected lcore 76 as core 4 on socket 0
00:04:48.490 EAL: Detected lcore 77 as core 5 on socket 0
00:04:48.490 EAL: Detected lcore 78 as core 6 on socket 0
00:04:48.490 EAL: Detected lcore 79 as core 7 on socket 0
00:04:48.490 EAL: Detected lcore 80 as core 8 on socket 0
00:04:48.490 EAL: Detected lcore 81 as core 9 on socket 0
00:04:48.490 EAL: Detected lcore 82 as core 10 on socket 0
00:04:48.490 EAL: Detected lcore 83 as core 11 on socket 0
00:04:48.490 EAL: Detected lcore 84 as core 12 on socket 0
00:04:48.490 EAL: Detected lcore 85 as core 13 on socket 0
00:04:48.490 EAL: Detected lcore 86 as core 14 on socket 0
00:04:48.490 EAL: Detected lcore 87 as core 15 on socket 0
00:04:48.490 EAL: Detected lcore 88 as core 16 on socket 0
00:04:48.490 EAL: Detected lcore 89 as core 17 on socket 0
00:04:48.490 EAL: Detected lcore 90 as core 18 on socket 0
00:04:48.490 EAL: Detected lcore 91 as core 19 on socket 0
00:04:48.490 EAL: Detected lcore 92 as core 20 on socket 0
00:04:48.490 EAL: Detected lcore 93 as core 21 on socket 0
00:04:48.490 EAL: Detected lcore 94 as core 22 on socket 0
00:04:48.490 EAL: Detected lcore 95 as core 23 on socket 0
00:04:48.490 EAL: Detected lcore 96 as core 24 on socket 0
00:04:48.490 EAL: Detected lcore 97 as core 25 on socket 0
00:04:48.490 EAL: Detected lcore 98 as core 26 on socket 0
00:04:48.490 EAL: Detected lcore 99 as core 27 on socket 0
00:04:48.490 EAL: Detected lcore 100 as core 28 on socket 0
00:04:48.490 EAL: Detected lcore 101 as core 29 on socket 0
00:04:48.490 EAL: Detected lcore 102 as core 30 on socket 0
00:04:48.490 EAL: Detected lcore 103 as core 31 on socket 0
00:04:48.490 EAL: Detected lcore 104 as core 32 on socket 0
00:04:48.490 EAL: Detected lcore 105 as core 33 on socket 0
00:04:48.490 EAL: Detected lcore 106 as core 34 on socket 0
00:04:48.490 EAL: Detected lcore 107 as core 35 on socket 0
00:04:48.490 EAL: Detected lcore 108 as core 0 on socket 1
00:04:48.490 EAL: Detected lcore 109 as core 1 on socket 1
00:04:48.490 EAL: Detected lcore 110 as core 2 on socket 1
00:04:48.490 EAL: Detected lcore 111 as core 3 on socket 1
00:04:48.490 EAL: Detected lcore 112 as core 4 on socket 1
00:04:48.490 EAL: Detected lcore 113 as core 5 on socket 1
00:04:48.490 EAL: Detected lcore 114 as core 6 on socket 1
00:04:48.490 EAL: Detected lcore 115 as core 7 on socket 1
00:04:48.490 EAL: Detected lcore 116 as core 8 on socket 1
00:04:48.490 EAL: Detected lcore 117 as core 9 on socket 1
00:04:48.490 EAL: Detected lcore 118 as core 10 on socket 1
00:04:48.490 EAL: Detected lcore 119 as core 11 on socket 1
00:04:48.490 EAL: Detected lcore 120 as core 12 on socket 1
00:04:48.490 EAL: Detected lcore 121 as core 13 on socket 1
00:04:48.490 EAL: Detected lcore 122 as core 14 on socket 1
00:04:48.490 EAL: Detected lcore 123 as core 15 on socket 1
00:04:48.490 EAL: Detected lcore 124 as core 16 on socket 1
00:04:48.490 EAL: Detected lcore 125 as core 17 on socket 1
00:04:48.490 EAL: Detected lcore 126 as core 18 on socket 1
00:04:48.490 EAL: Detected lcore 127 as core 19 on socket 1
00:04:48.490 EAL: Skipped lcore 128 as core 20 on socket 1
00:04:48.490 EAL: Skipped lcore 129 as core 21 on socket 1
00:04:48.490 EAL: Skipped lcore 130 as core 22 on socket 1
00:04:48.490 EAL: Skipped lcore 131 as core 23 on socket 1
00:04:48.490 EAL: Skipped lcore 132 as core 24 on socket 1
00:04:48.490 EAL: Skipped lcore 133 as core 25 on socket 1
00:04:48.490 EAL: Skipped lcore 134 as core 26 on socket 1
00:04:48.490 EAL: Skipped lcore 135 as core 27 on socket 1
00:04:48.490 EAL: Skipped lcore 136 as core 28 on socket 1
00:04:48.490 EAL: Skipped lcore 137 as core 29 on socket 1
00:04:48.490 EAL: Skipped lcore 138 as core 30 on socket 1
00:04:48.490 EAL: Skipped lcore 139 as core 31 on socket 1
00:04:48.490 EAL: Skipped lcore 140 as core 32 on socket 1
00:04:48.490 EAL: Skipped lcore 141 as core 33 on socket 1
00:04:48.490 EAL: Skipped lcore 142 as core 34 on socket 1
00:04:48.490 EAL: Skipped lcore 143 as core 35 on socket 1
00:04:48.490 EAL: Maximum logical cores by configuration: 128
00:04:48.490 EAL: Detected CPU lcores: 128
00:04:48.490 EAL: Detected NUMA nodes: 2
00:04:48.490 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:48.490 EAL: Detected shared linkage of DPDK
00:04:48.490 EAL: No shared files mode enabled, IPC will be disabled
00:04:48.490 EAL: Bus pci wants IOVA as 'DC'
00:04:48.490 EAL: Buses did not request a specific IOVA mode.
00:04:48.490 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:48.490 EAL: Selected IOVA mode 'VA'
00:04:48.491 EAL: Probing VFIO support...
00:04:48.491 EAL: IOMMU type 1 (Type 1) is supported
00:04:48.491 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:48.491 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:48.491 EAL: VFIO support initialized
00:04:48.491 EAL: Ask a virtual area of 0x2e000 bytes
00:04:48.491 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:48.491 EAL: Setting up physically contiguous memory...
00:04:48.491 EAL: Setting maximum number of open files to 524288
00:04:48.491 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:48.491 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:48.491 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:48.491 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.491 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:48.491 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.491 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.491 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:48.491 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:48.491 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.491 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:48.491 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.491 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.491 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:48.491 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:48.491 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.491 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:48.491 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.491 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.491 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:48.491 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:48.491 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.491 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:48.491 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.491 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.491 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:48.491 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:48.491 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:48.491 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.491 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:48.491 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.491 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.491 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:48.491 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:48.491 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.491 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:48.491 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.491 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.491 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:48.491 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:48.491 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.491 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:48.491 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.491 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.491 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:48.491 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:48.491 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.491 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:48.491 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.491 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.491 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:48.491 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:48.491 EAL: Hugepages will be freed exactly as allocated.
00:04:48.491 EAL: No shared files mode enabled, IPC is disabled 00:04:48.491 EAL: No shared files mode enabled, IPC is disabled 00:04:48.491 EAL: TSC frequency is ~2400000 KHz 00:04:48.491 EAL: Main lcore 0 is ready (tid=7f6b70d35a00;cpuset=[0]) 00:04:48.491 EAL: Trying to obtain current memory policy. 00:04:48.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.491 EAL: Restoring previous memory policy: 0 00:04:48.491 EAL: request: mp_malloc_sync 00:04:48.491 EAL: No shared files mode enabled, IPC is disabled 00:04:48.491 EAL: Heap on socket 0 was expanded by 2MB 00:04:48.491 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:48.752 EAL: Mem event callback 'spdk:(nil)' registered 00:04:48.752 00:04:48.752 00:04:48.752 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.752 http://cunit.sourceforge.net/ 00:04:48.752 00:04:48.752 00:04:48.752 Suite: components_suite 00:04:48.752 Test: vtophys_malloc_test ...passed 00:04:48.752 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:48.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.752 EAL: Restoring previous memory policy: 4 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was expanded by 4MB 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was shrunk by 4MB 00:04:48.752 EAL: Trying to obtain current memory policy. 
00:04:48.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.752 EAL: Restoring previous memory policy: 4 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was expanded by 6MB 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was shrunk by 6MB 00:04:48.752 EAL: Trying to obtain current memory policy. 00:04:48.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.752 EAL: Restoring previous memory policy: 4 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was expanded by 10MB 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was shrunk by 10MB 00:04:48.752 EAL: Trying to obtain current memory policy. 00:04:48.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.752 EAL: Restoring previous memory policy: 4 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was expanded by 18MB 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was shrunk by 18MB 00:04:48.752 EAL: Trying to obtain current memory policy. 
00:04:48.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.752 EAL: Restoring previous memory policy: 4 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was expanded by 34MB 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was shrunk by 34MB 00:04:48.752 EAL: Trying to obtain current memory policy. 00:04:48.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.752 EAL: Restoring previous memory policy: 4 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was expanded by 66MB 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was shrunk by 66MB 00:04:48.752 EAL: Trying to obtain current memory policy. 00:04:48.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.752 EAL: Restoring previous memory policy: 4 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was expanded by 130MB 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.752 EAL: No shared files mode enabled, IPC is disabled 00:04:48.752 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.752 EAL: Trying to obtain current memory policy. 
00:04:48.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.752 EAL: Restoring previous memory policy: 4 00:04:48.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.752 EAL: request: mp_malloc_sync 00:04:48.753 EAL: No shared files mode enabled, IPC is disabled 00:04:48.753 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.753 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.753 EAL: request: mp_malloc_sync 00:04:48.753 EAL: No shared files mode enabled, IPC is disabled 00:04:48.753 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.753 EAL: Trying to obtain current memory policy. 00:04:48.753 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.013 EAL: Restoring previous memory policy: 4 00:04:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.013 EAL: request: mp_malloc_sync 00:04:49.013 EAL: No shared files mode enabled, IPC is disabled 00:04:49.013 EAL: Heap on socket 0 was expanded by 514MB 00:04:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.013 EAL: request: mp_malloc_sync 00:04:49.013 EAL: No shared files mode enabled, IPC is disabled 00:04:49.013 EAL: Heap on socket 0 was shrunk by 514MB 00:04:49.013 EAL: Trying to obtain current memory policy. 
00:04:49.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.275 EAL: Restoring previous memory policy: 4 00:04:49.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.275 EAL: request: mp_malloc_sync 00:04:49.275 EAL: No shared files mode enabled, IPC is disabled 00:04:49.275 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.275 EAL: request: mp_malloc_sync 00:04:49.275 EAL: No shared files mode enabled, IPC is disabled 00:04:49.275 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:49.275 passed 00:04:49.275 00:04:49.275 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.275 suites 1 1 n/a 0 0 00:04:49.275 tests 2 2 2 0 0 00:04:49.275 asserts 497 497 497 0 n/a 00:04:49.275 00:04:49.275 Elapsed time = 0.659 seconds 00:04:49.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.275 EAL: request: mp_malloc_sync 00:04:49.275 EAL: No shared files mode enabled, IPC is disabled 00:04:49.275 EAL: Heap on socket 0 was shrunk by 2MB 00:04:49.275 EAL: No shared files mode enabled, IPC is disabled 00:04:49.275 EAL: No shared files mode enabled, IPC is disabled 00:04:49.275 EAL: No shared files mode enabled, IPC is disabled 00:04:49.275 00:04:49.275 real 0m0.814s 00:04:49.275 user 0m0.420s 00:04:49.275 sys 0m0.352s 00:04:49.275 10:58:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.275 10:58:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:49.275 ************************************ 00:04:49.275 END TEST env_vtophys 00:04:49.275 ************************************ 00:04:49.275 10:58:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.275 10:58:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.275 10:58:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.275 10:58:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.536 
************************************ 00:04:49.536 START TEST env_pci 00:04:49.536 ************************************ 00:04:49.536 10:58:57 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.536 00:04:49.536 00:04:49.536 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.536 http://cunit.sourceforge.net/ 00:04:49.536 00:04:49.536 00:04:49.536 Suite: pci 00:04:49.536 Test: pci_hook ...[2024-11-19 10:58:57.662669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3877168 has claimed it 00:04:49.536 EAL: Cannot find device (10000:00:01.0) 00:04:49.536 EAL: Failed to attach device on primary process 00:04:49.536 passed 00:04:49.536 00:04:49.536 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.536 suites 1 1 n/a 0 0 00:04:49.536 tests 1 1 1 0 0 00:04:49.536 asserts 25 25 25 0 n/a 00:04:49.536 00:04:49.536 Elapsed time = 0.034 seconds 00:04:49.536 00:04:49.536 real 0m0.054s 00:04:49.536 user 0m0.018s 00:04:49.536 sys 0m0.035s 00:04:49.536 10:58:57 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.536 10:58:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:49.536 ************************************ 00:04:49.536 END TEST env_pci 00:04:49.536 ************************************ 00:04:49.536 10:58:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:49.536 10:58:57 env -- env/env.sh@15 -- # uname 00:04:49.536 10:58:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:49.536 10:58:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:49.536 10:58:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.536 10:58:57 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:49.536 10:58:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.536 10:58:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.536 ************************************ 00:04:49.536 START TEST env_dpdk_post_init 00:04:49.536 ************************************ 00:04:49.536 10:58:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.536 EAL: Detected CPU lcores: 128 00:04:49.536 EAL: Detected NUMA nodes: 2 00:04:49.536 EAL: Detected shared linkage of DPDK 00:04:49.536 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.536 EAL: Selected IOVA mode 'VA' 00:04:49.536 EAL: VFIO support initialized 00:04:49.536 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.798 EAL: Using IOMMU type 1 (Type 1) 00:04:49.798 EAL: Ignore mapping IO port bar(1) 00:04:49.798 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:50.059 EAL: Ignore mapping IO port bar(1) 00:04:50.059 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:50.319 EAL: Ignore mapping IO port bar(1) 00:04:50.319 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:50.581 EAL: Ignore mapping IO port bar(1) 00:04:50.581 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:50.581 EAL: Ignore mapping IO port bar(1) 00:04:50.842 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:50.842 EAL: Ignore mapping IO port bar(1) 00:04:51.104 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:51.104 EAL: Ignore mapping IO port bar(1) 00:04:51.555 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:51.555 EAL: Ignore mapping IO port bar(1) 00:04:51.555 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:51.843 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:51.843 EAL: Ignore mapping IO port bar(1) 00:04:51.843 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:52.105 EAL: Ignore mapping IO port bar(1) 00:04:52.105 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:52.366 EAL: Ignore mapping IO port bar(1) 00:04:52.366 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:52.627 EAL: Ignore mapping IO port bar(1) 00:04:52.627 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:52.627 EAL: Ignore mapping IO port bar(1) 00:04:52.888 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:52.888 EAL: Ignore mapping IO port bar(1) 00:04:53.157 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:53.157 EAL: Ignore mapping IO port bar(1) 00:04:53.157 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:53.417 EAL: Ignore mapping IO port bar(1) 00:04:53.417 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:53.417 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:53.417 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:53.678 Starting DPDK initialization... 00:04:53.678 Starting SPDK post initialization... 00:04:53.678 SPDK NVMe probe 00:04:53.678 Attaching to 0000:65:00.0 00:04:53.678 Attached to 0000:65:00.0 00:04:53.678 Cleaning up... 
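Looking back at the env_vtophys output earlier in this log, the heap expand/shrink sizes reported by EAL (4MB, 6MB, 10MB, 18MB, ... 1026MB) follow a simple pattern. The sketch below only checks that observation; attributing it to power-of-two test allocations plus a fixed 2 MB of rounding/overhead is an assumption about the allocator, not something the log states:

```python
# Expand/shrink sizes taken verbatim from the vtophys_spdk_malloc_test
# EAL lines above, in MB.
logged_mb = [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]

# Each size is 2**k + 2 for k = 1..10 (assumed: a power-of-two allocation
# plus one extra 2 MB hugepage of rounding/overhead).
assert logged_mb == [2**k + 2 for k in range(1, 11)]
print("pattern holds for all", len(logged_mb), "heap events")
```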
00:04:55.595 00:04:55.595 real 0m5.738s 00:04:55.595 user 0m0.111s 00:04:55.595 sys 0m0.172s 00:04:55.595 10:59:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.595 10:59:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.595 ************************************ 00:04:55.595 END TEST env_dpdk_post_init 00:04:55.595 ************************************ 00:04:55.595 10:59:03 env -- env/env.sh@26 -- # uname 00:04:55.595 10:59:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.595 10:59:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.595 10:59:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.595 10:59:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.595 10:59:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.595 ************************************ 00:04:55.595 START TEST env_mem_callbacks 00:04:55.595 ************************************ 00:04:55.595 10:59:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.595 EAL: Detected CPU lcores: 128 00:04:55.595 EAL: Detected NUMA nodes: 2 00:04:55.595 EAL: Detected shared linkage of DPDK 00:04:55.595 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.595 EAL: Selected IOVA mode 'VA' 00:04:55.595 EAL: VFIO support initialized 00:04:55.595 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.595 00:04:55.595 00:04:55.595 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.595 http://cunit.sourceforge.net/ 00:04:55.595 00:04:55.595 00:04:55.595 Suite: memory 00:04:55.595 Test: test ... 
00:04:55.595 register 0x200000200000 2097152 00:04:55.595 malloc 3145728 00:04:55.595 register 0x200000400000 4194304 00:04:55.595 buf 0x200000500000 len 3145728 PASSED 00:04:55.595 malloc 64 00:04:55.595 buf 0x2000004fff40 len 64 PASSED 00:04:55.595 malloc 4194304 00:04:55.595 register 0x200000800000 6291456 00:04:55.595 buf 0x200000a00000 len 4194304 PASSED 00:04:55.595 free 0x200000500000 3145728 00:04:55.595 free 0x2000004fff40 64 00:04:55.595 unregister 0x200000400000 4194304 PASSED 00:04:55.595 free 0x200000a00000 4194304 00:04:55.595 unregister 0x200000800000 6291456 PASSED 00:04:55.595 malloc 8388608 00:04:55.595 register 0x200000400000 10485760 00:04:55.595 buf 0x200000600000 len 8388608 PASSED 00:04:55.595 free 0x200000600000 8388608 00:04:55.595 unregister 0x200000400000 10485760 PASSED 00:04:55.595 passed 00:04:55.595 00:04:55.595 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.595 suites 1 1 n/a 0 0 00:04:55.595 tests 1 1 1 0 0 00:04:55.595 asserts 15 15 15 0 n/a 00:04:55.595 00:04:55.595 Elapsed time = 0.008 seconds 00:04:55.595 00:04:55.595 real 0m0.068s 00:04:55.595 user 0m0.020s 00:04:55.595 sys 0m0.048s 00:04:55.595 10:59:03 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.595 10:59:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:55.595 ************************************ 00:04:55.595 END TEST env_mem_callbacks 00:04:55.595 ************************************ 00:04:55.595 00:04:55.595 real 0m7.437s 00:04:55.595 user 0m1.006s 00:04:55.595 sys 0m0.962s 00:04:55.595 10:59:03 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.595 10:59:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.595 ************************************ 00:04:55.595 END TEST env 00:04:55.595 ************************************ 00:04:55.595 10:59:03 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.595 10:59:03 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.595 10:59:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.595 10:59:03 -- common/autotest_common.sh@10 -- # set +x 00:04:55.595 ************************************ 00:04:55.595 START TEST rpc 00:04:55.595 ************************************ 00:04:55.595 10:59:03 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.595 * Looking for test storage... 00:04:55.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.595 10:59:03 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.595 10:59:03 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.595 10:59:03 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.856 10:59:03 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.856 10:59:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.856 10:59:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.856 10:59:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.856 10:59:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.856 10:59:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.856 10:59:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.856 10:59:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.856 10:59:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.857 10:59:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.857 10:59:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.857 10:59:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.857 10:59:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.857 10:59:03 rpc -- scripts/common.sh@345 -- # : 1 00:04:55.857 10:59:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.857 10:59:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.857 10:59:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.857 10:59:03 rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.857 10:59:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.857 10:59:03 rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.857 10:59:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.857 10:59:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.857 10:59:03 rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.857 10:59:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.857 10:59:03 rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.857 10:59:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.857 10:59:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.857 10:59:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.857 10:59:03 rpc -- scripts/common.sh@368 -- # return 0 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.857 --rc genhtml_branch_coverage=1 00:04:55.857 --rc genhtml_function_coverage=1 00:04:55.857 --rc genhtml_legend=1 00:04:55.857 --rc geninfo_all_blocks=1 00:04:55.857 --rc geninfo_unexecuted_blocks=1 00:04:55.857 00:04:55.857 ' 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.857 --rc genhtml_branch_coverage=1 00:04:55.857 --rc genhtml_function_coverage=1 00:04:55.857 --rc genhtml_legend=1 00:04:55.857 --rc geninfo_all_blocks=1 00:04:55.857 --rc geninfo_unexecuted_blocks=1 00:04:55.857 00:04:55.857 ' 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:55.857 --rc genhtml_branch_coverage=1 00:04:55.857 --rc genhtml_function_coverage=1 00:04:55.857 --rc genhtml_legend=1 00:04:55.857 --rc geninfo_all_blocks=1 00:04:55.857 --rc geninfo_unexecuted_blocks=1 00:04:55.857 00:04:55.857 ' 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.857 --rc genhtml_branch_coverage=1 00:04:55.857 --rc genhtml_function_coverage=1 00:04:55.857 --rc genhtml_legend=1 00:04:55.857 --rc geninfo_all_blocks=1 00:04:55.857 --rc geninfo_unexecuted_blocks=1 00:04:55.857 00:04:55.857 ' 00:04:55.857 10:59:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3878494 00:04:55.857 10:59:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.857 10:59:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3878494 00:04:55.857 10:59:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 3878494 ']' 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.857 10:59:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.857 [2024-11-19 10:59:04.059140] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:04:55.857 [2024-11-19 10:59:04.059211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878494 ] 00:04:55.857 [2024-11-19 10:59:04.146055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.857 [2024-11-19 10:59:04.187479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:55.857 [2024-11-19 10:59:04.187517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3878494' to capture a snapshot of events at runtime. 00:04:55.857 [2024-11-19 10:59:04.187525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:55.857 [2024-11-19 10:59:04.187532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:55.857 [2024-11-19 10:59:04.187538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3878494 for offline analysis/debug. 
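The xtrace above (scripts/common.sh) steps through `cmp_versions 1.15 '<' 2`: it splits each version on `IFS=.-:`, walks the components, and returns as soon as one pair differs. A minimal Python approximation of that comparison (the function name and zero-padding detail are this sketch's simplifications, not SPDK code):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    # Split on '.', '-' or ':', mirroring the shell's IFS=.-: word splitting.
    a = [int(x) for x in re.split(r"[.\-:]", v1)]
    b = [int(x) for x in re.split(r"[.\-:]", v2)]
    # Pad the shorter version with zeros so both have the same length.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    # First differing component decides, as in the traced shell loop.
    for x, y in zip(a, b):
        if x != y:
            return {"<": x < y, ">": x > y}[op]
    return op not in ("<", ">")  # equal versions satisfy only non-strict ops

print(cmp_versions("1.15", "<", "2"))  # True, matching the 'lt 1.15 2' trace
```

This is why the trace shows `ver1[v]=1` compared against `ver2[v]=2` and an immediate `return 0`: "1.15" is less than "2" because the first components already differ, so the lcov version check passes without ever looking at the ".15".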
00:04:55.857 [2024-11-19 10:59:04.188174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.802 10:59:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.802 10:59:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:56.802 10:59:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.802 10:59:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.802 10:59:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:56.802 10:59:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:56.802 10:59:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.802 10:59:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.802 10:59:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.802 ************************************ 00:04:56.802 START TEST rpc_integrity 00:04:56.802 ************************************ 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:56.802 10:59:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.802 10:59:04 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.802 10:59:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.802 10:59:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.802 10:59:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.802 10:59:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:56.802 10:59:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.802 10:59:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.802 10:59:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.802 { 00:04:56.802 "name": "Malloc0", 00:04:56.802 "aliases": [ 00:04:56.802 "a321927d-9416-42d4-a908-53840f8b122f" 00:04:56.802 ], 00:04:56.802 "product_name": "Malloc disk", 00:04:56.802 "block_size": 512, 00:04:56.802 "num_blocks": 16384, 00:04:56.802 "uuid": "a321927d-9416-42d4-a908-53840f8b122f", 00:04:56.802 "assigned_rate_limits": { 00:04:56.802 "rw_ios_per_sec": 0, 00:04:56.802 "rw_mbytes_per_sec": 0, 00:04:56.802 "r_mbytes_per_sec": 0, 00:04:56.802 "w_mbytes_per_sec": 0 00:04:56.802 }, 00:04:56.802 "claimed": false, 00:04:56.802 "zoned": false, 00:04:56.802 "supported_io_types": { 00:04:56.802 "read": true, 00:04:56.802 "write": true, 00:04:56.802 "unmap": true, 00:04:56.802 "flush": true, 00:04:56.802 "reset": true, 00:04:56.802 "nvme_admin": false, 00:04:56.802 "nvme_io": false, 00:04:56.802 "nvme_io_md": false, 00:04:56.802 "write_zeroes": true, 00:04:56.802 "zcopy": true, 00:04:56.802 "get_zone_info": false, 00:04:56.802 
"zone_management": false, 00:04:56.802 "zone_append": false, 00:04:56.802 "compare": false, 00:04:56.802 "compare_and_write": false, 00:04:56.802 "abort": true, 00:04:56.802 "seek_hole": false, 00:04:56.802 "seek_data": false, 00:04:56.802 "copy": true, 00:04:56.802 "nvme_iov_md": false 00:04:56.802 }, 00:04:56.802 "memory_domains": [ 00:04:56.802 { 00:04:56.802 "dma_device_id": "system", 00:04:56.802 "dma_device_type": 1 00:04:56.802 }, 00:04:56.802 { 00:04:56.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.802 "dma_device_type": 2 00:04:56.802 } 00:04:56.802 ], 00:04:56.802 "driver_specific": {} 00:04:56.802 } 00:04:56.802 ]' 00:04:56.802 10:59:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.802 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.802 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:56.802 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.802 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.802 [2024-11-19 10:59:05.024315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:56.802 [2024-11-19 10:59:05.024347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.802 [2024-11-19 10:59:05.024360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x853b10 00:04:56.802 [2024-11-19 10:59:05.024368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.802 [2024-11-19 10:59:05.025728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.802 [2024-11-19 10:59:05.025750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.802 Passthru0 00:04:56.802 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.802 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:56.802 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.802 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.802 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.802 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.802 { 00:04:56.802 "name": "Malloc0", 00:04:56.802 "aliases": [ 00:04:56.802 "a321927d-9416-42d4-a908-53840f8b122f" 00:04:56.802 ], 00:04:56.802 "product_name": "Malloc disk", 00:04:56.802 "block_size": 512, 00:04:56.802 "num_blocks": 16384, 00:04:56.802 "uuid": "a321927d-9416-42d4-a908-53840f8b122f", 00:04:56.802 "assigned_rate_limits": { 00:04:56.802 "rw_ios_per_sec": 0, 00:04:56.802 "rw_mbytes_per_sec": 0, 00:04:56.802 "r_mbytes_per_sec": 0, 00:04:56.802 "w_mbytes_per_sec": 0 00:04:56.802 }, 00:04:56.802 "claimed": true, 00:04:56.802 "claim_type": "exclusive_write", 00:04:56.802 "zoned": false, 00:04:56.802 "supported_io_types": { 00:04:56.802 "read": true, 00:04:56.802 "write": true, 00:04:56.802 "unmap": true, 00:04:56.802 "flush": true, 00:04:56.802 "reset": true, 00:04:56.802 "nvme_admin": false, 00:04:56.802 "nvme_io": false, 00:04:56.802 "nvme_io_md": false, 00:04:56.802 "write_zeroes": true, 00:04:56.802 "zcopy": true, 00:04:56.802 "get_zone_info": false, 00:04:56.802 "zone_management": false, 00:04:56.802 "zone_append": false, 00:04:56.802 "compare": false, 00:04:56.802 "compare_and_write": false, 00:04:56.802 "abort": true, 00:04:56.802 "seek_hole": false, 00:04:56.802 "seek_data": false, 00:04:56.802 "copy": true, 00:04:56.802 "nvme_iov_md": false 00:04:56.802 }, 00:04:56.802 "memory_domains": [ 00:04:56.802 { 00:04:56.802 "dma_device_id": "system", 00:04:56.802 "dma_device_type": 1 00:04:56.802 }, 00:04:56.802 { 00:04:56.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.802 "dma_device_type": 2 00:04:56.802 } 00:04:56.802 ], 00:04:56.802 "driver_specific": {} 00:04:56.802 }, 00:04:56.802 { 
00:04:56.802 "name": "Passthru0", 00:04:56.802 "aliases": [ 00:04:56.802 "85c06910-967d-57ed-8c1e-7061159dec4c" 00:04:56.802 ], 00:04:56.802 "product_name": "passthru", 00:04:56.802 "block_size": 512, 00:04:56.802 "num_blocks": 16384, 00:04:56.802 "uuid": "85c06910-967d-57ed-8c1e-7061159dec4c", 00:04:56.802 "assigned_rate_limits": { 00:04:56.802 "rw_ios_per_sec": 0, 00:04:56.802 "rw_mbytes_per_sec": 0, 00:04:56.802 "r_mbytes_per_sec": 0, 00:04:56.802 "w_mbytes_per_sec": 0 00:04:56.802 }, 00:04:56.802 "claimed": false, 00:04:56.802 "zoned": false, 00:04:56.802 "supported_io_types": { 00:04:56.802 "read": true, 00:04:56.802 "write": true, 00:04:56.802 "unmap": true, 00:04:56.802 "flush": true, 00:04:56.802 "reset": true, 00:04:56.802 "nvme_admin": false, 00:04:56.802 "nvme_io": false, 00:04:56.802 "nvme_io_md": false, 00:04:56.802 "write_zeroes": true, 00:04:56.802 "zcopy": true, 00:04:56.802 "get_zone_info": false, 00:04:56.802 "zone_management": false, 00:04:56.803 "zone_append": false, 00:04:56.803 "compare": false, 00:04:56.803 "compare_and_write": false, 00:04:56.803 "abort": true, 00:04:56.803 "seek_hole": false, 00:04:56.803 "seek_data": false, 00:04:56.803 "copy": true, 00:04:56.803 "nvme_iov_md": false 00:04:56.803 }, 00:04:56.803 "memory_domains": [ 00:04:56.803 { 00:04:56.803 "dma_device_id": "system", 00:04:56.803 "dma_device_type": 1 00:04:56.803 }, 00:04:56.803 { 00:04:56.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.803 "dma_device_type": 2 00:04:56.803 } 00:04:56.803 ], 00:04:56.803 "driver_specific": { 00:04:56.803 "passthru": { 00:04:56.803 "name": "Passthru0", 00:04:56.803 "base_bdev_name": "Malloc0" 00:04:56.803 } 00:04:56.803 } 00:04:56.803 } 00:04:56.803 ]' 00:04:56.803 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.803 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.803 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.803 10:59:05 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.803 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.803 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.803 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:56.803 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.803 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.803 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.803 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.803 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.803 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.803 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.803 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.803 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.064 10:59:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.064 00:04:57.064 real 0m0.290s 00:04:57.064 user 0m0.175s 00:04:57.064 sys 0m0.050s 00:04:57.064 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.064 10:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.064 ************************************ 00:04:57.064 END TEST rpc_integrity 00:04:57.064 ************************************ 00:04:57.064 10:59:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:57.064 10:59:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.064 10:59:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.064 10:59:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.064 ************************************ 00:04:57.064 START TEST rpc_plugins 
00:04:57.064 ************************************ 00:04:57.064 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:57.064 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:57.064 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.064 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.064 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.064 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:57.064 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:57.064 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.064 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.064 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.064 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:57.064 { 00:04:57.064 "name": "Malloc1", 00:04:57.064 "aliases": [ 00:04:57.064 "204e0f8f-a02c-43f1-9353-c0500fa2397d" 00:04:57.064 ], 00:04:57.064 "product_name": "Malloc disk", 00:04:57.064 "block_size": 4096, 00:04:57.064 "num_blocks": 256, 00:04:57.064 "uuid": "204e0f8f-a02c-43f1-9353-c0500fa2397d", 00:04:57.064 "assigned_rate_limits": { 00:04:57.064 "rw_ios_per_sec": 0, 00:04:57.064 "rw_mbytes_per_sec": 0, 00:04:57.064 "r_mbytes_per_sec": 0, 00:04:57.064 "w_mbytes_per_sec": 0 00:04:57.064 }, 00:04:57.064 "claimed": false, 00:04:57.064 "zoned": false, 00:04:57.064 "supported_io_types": { 00:04:57.064 "read": true, 00:04:57.064 "write": true, 00:04:57.065 "unmap": true, 00:04:57.065 "flush": true, 00:04:57.065 "reset": true, 00:04:57.065 "nvme_admin": false, 00:04:57.065 "nvme_io": false, 00:04:57.065 "nvme_io_md": false, 00:04:57.065 "write_zeroes": true, 00:04:57.065 "zcopy": true, 00:04:57.065 "get_zone_info": false, 00:04:57.065 "zone_management": false, 00:04:57.065 
"zone_append": false, 00:04:57.065 "compare": false, 00:04:57.065 "compare_and_write": false, 00:04:57.065 "abort": true, 00:04:57.065 "seek_hole": false, 00:04:57.065 "seek_data": false, 00:04:57.065 "copy": true, 00:04:57.065 "nvme_iov_md": false 00:04:57.065 }, 00:04:57.065 "memory_domains": [ 00:04:57.065 { 00:04:57.065 "dma_device_id": "system", 00:04:57.065 "dma_device_type": 1 00:04:57.065 }, 00:04:57.065 { 00:04:57.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.065 "dma_device_type": 2 00:04:57.065 } 00:04:57.065 ], 00:04:57.065 "driver_specific": {} 00:04:57.065 } 00:04:57.065 ]' 00:04:57.065 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:57.065 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:57.065 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:57.065 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.065 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.065 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.065 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:57.065 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.065 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.065 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.065 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:57.065 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:57.065 10:59:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:57.065 00:04:57.065 real 0m0.151s 00:04:57.065 user 0m0.093s 00:04:57.065 sys 0m0.022s 00:04:57.065 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.065 10:59:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.065 ************************************ 
00:04:57.065 END TEST rpc_plugins 00:04:57.065 ************************************ 00:04:57.326 10:59:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:57.326 10:59:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.326 10:59:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.326 10:59:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.326 ************************************ 00:04:57.326 START TEST rpc_trace_cmd_test 00:04:57.326 ************************************ 00:04:57.326 10:59:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:57.326 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:57.326 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:57.326 10:59:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.326 10:59:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.326 10:59:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.326 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:57.326 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3878494", 00:04:57.326 "tpoint_group_mask": "0x8", 00:04:57.326 "iscsi_conn": { 00:04:57.326 "mask": "0x2", 00:04:57.326 "tpoint_mask": "0x0" 00:04:57.326 }, 00:04:57.326 "scsi": { 00:04:57.326 "mask": "0x4", 00:04:57.326 "tpoint_mask": "0x0" 00:04:57.326 }, 00:04:57.326 "bdev": { 00:04:57.326 "mask": "0x8", 00:04:57.326 "tpoint_mask": "0xffffffffffffffff" 00:04:57.326 }, 00:04:57.326 "nvmf_rdma": { 00:04:57.326 "mask": "0x10", 00:04:57.326 "tpoint_mask": "0x0" 00:04:57.326 }, 00:04:57.326 "nvmf_tcp": { 00:04:57.326 "mask": "0x20", 00:04:57.326 "tpoint_mask": "0x0" 00:04:57.326 }, 00:04:57.327 "ftl": { 00:04:57.327 "mask": "0x40", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "blobfs": { 00:04:57.327 "mask": "0x80", 00:04:57.327 
"tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "dsa": { 00:04:57.327 "mask": "0x200", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "thread": { 00:04:57.327 "mask": "0x400", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "nvme_pcie": { 00:04:57.327 "mask": "0x800", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "iaa": { 00:04:57.327 "mask": "0x1000", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "nvme_tcp": { 00:04:57.327 "mask": "0x2000", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "bdev_nvme": { 00:04:57.327 "mask": "0x4000", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "sock": { 00:04:57.327 "mask": "0x8000", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "blob": { 00:04:57.327 "mask": "0x10000", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "bdev_raid": { 00:04:57.327 "mask": "0x20000", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 }, 00:04:57.327 "scheduler": { 00:04:57.327 "mask": "0x40000", 00:04:57.327 "tpoint_mask": "0x0" 00:04:57.327 } 00:04:57.327 }' 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:57.327 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:57.589 10:59:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:57.589 00:04:57.589 real 0m0.214s 00:04:57.589 user 0m0.173s 00:04:57.589 sys 0m0.033s 00:04:57.589 10:59:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.589 10:59:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.589 ************************************ 00:04:57.589 END TEST rpc_trace_cmd_test 00:04:57.589 ************************************ 00:04:57.589 10:59:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:57.589 10:59:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:57.589 10:59:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:57.589 10:59:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.589 10:59:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.589 10:59:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.589 ************************************ 00:04:57.589 START TEST rpc_daemon_integrity 00:04:57.589 ************************************ 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.589 { 00:04:57.589 "name": "Malloc2", 00:04:57.589 "aliases": [ 00:04:57.589 "618956a6-bf8a-4e78-bbc7-06e1cc01f2ac" 00:04:57.589 ], 00:04:57.589 "product_name": "Malloc disk", 00:04:57.589 "block_size": 512, 00:04:57.589 "num_blocks": 16384, 00:04:57.589 "uuid": "618956a6-bf8a-4e78-bbc7-06e1cc01f2ac", 00:04:57.589 "assigned_rate_limits": { 00:04:57.589 "rw_ios_per_sec": 0, 00:04:57.589 "rw_mbytes_per_sec": 0, 00:04:57.589 "r_mbytes_per_sec": 0, 00:04:57.589 "w_mbytes_per_sec": 0 00:04:57.589 }, 00:04:57.589 "claimed": false, 00:04:57.589 "zoned": false, 00:04:57.589 "supported_io_types": { 00:04:57.589 "read": true, 00:04:57.589 "write": true, 00:04:57.589 "unmap": true, 00:04:57.589 "flush": true, 00:04:57.589 "reset": true, 00:04:57.589 "nvme_admin": false, 00:04:57.589 "nvme_io": false, 00:04:57.589 "nvme_io_md": false, 00:04:57.589 "write_zeroes": true, 00:04:57.589 "zcopy": true, 00:04:57.589 "get_zone_info": false, 00:04:57.589 "zone_management": false, 00:04:57.589 "zone_append": false, 00:04:57.589 "compare": false, 00:04:57.589 "compare_and_write": false, 00:04:57.589 "abort": true, 00:04:57.589 "seek_hole": false, 00:04:57.589 "seek_data": false, 00:04:57.589 "copy": true, 00:04:57.589 "nvme_iov_md": false 00:04:57.589 }, 00:04:57.589 "memory_domains": [ 00:04:57.589 { 
00:04:57.589 "dma_device_id": "system", 00:04:57.589 "dma_device_type": 1 00:04:57.589 }, 00:04:57.589 { 00:04:57.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.589 "dma_device_type": 2 00:04:57.589 } 00:04:57.589 ], 00:04:57.589 "driver_specific": {} 00:04:57.589 } 00:04:57.589 ]' 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.589 [2024-11-19 10:59:05.910721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:57.589 [2024-11-19 10:59:05.910752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.589 [2024-11-19 10:59:05.910765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8e4380 00:04:57.589 [2024-11-19 10:59:05.910772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.589 [2024-11-19 10:59:05.912036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.589 [2024-11-19 10:59:05.912056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.589 Passthru0 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.589 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.850 { 00:04:57.850 "name": "Malloc2", 00:04:57.850 "aliases": [ 00:04:57.850 "618956a6-bf8a-4e78-bbc7-06e1cc01f2ac" 00:04:57.850 ], 00:04:57.850 "product_name": "Malloc disk", 00:04:57.850 "block_size": 512, 00:04:57.850 "num_blocks": 16384, 00:04:57.850 "uuid": "618956a6-bf8a-4e78-bbc7-06e1cc01f2ac", 00:04:57.850 "assigned_rate_limits": { 00:04:57.850 "rw_ios_per_sec": 0, 00:04:57.850 "rw_mbytes_per_sec": 0, 00:04:57.850 "r_mbytes_per_sec": 0, 00:04:57.850 "w_mbytes_per_sec": 0 00:04:57.850 }, 00:04:57.850 "claimed": true, 00:04:57.850 "claim_type": "exclusive_write", 00:04:57.850 "zoned": false, 00:04:57.850 "supported_io_types": { 00:04:57.850 "read": true, 00:04:57.850 "write": true, 00:04:57.850 "unmap": true, 00:04:57.850 "flush": true, 00:04:57.850 "reset": true, 00:04:57.850 "nvme_admin": false, 00:04:57.850 "nvme_io": false, 00:04:57.850 "nvme_io_md": false, 00:04:57.850 "write_zeroes": true, 00:04:57.850 "zcopy": true, 00:04:57.850 "get_zone_info": false, 00:04:57.850 "zone_management": false, 00:04:57.850 "zone_append": false, 00:04:57.850 "compare": false, 00:04:57.850 "compare_and_write": false, 00:04:57.850 "abort": true, 00:04:57.850 "seek_hole": false, 00:04:57.850 "seek_data": false, 00:04:57.850 "copy": true, 00:04:57.850 "nvme_iov_md": false 00:04:57.850 }, 00:04:57.850 "memory_domains": [ 00:04:57.850 { 00:04:57.850 "dma_device_id": "system", 00:04:57.850 "dma_device_type": 1 00:04:57.850 }, 00:04:57.850 { 00:04:57.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.850 "dma_device_type": 2 00:04:57.850 } 00:04:57.850 ], 00:04:57.850 "driver_specific": {} 00:04:57.850 }, 00:04:57.850 { 00:04:57.850 "name": "Passthru0", 00:04:57.850 "aliases": [ 00:04:57.850 "f5e5c10e-7222-579d-ba3f-98e0224b4fa5" 00:04:57.850 ], 00:04:57.850 "product_name": "passthru", 00:04:57.850 "block_size": 512, 00:04:57.850 "num_blocks": 16384, 00:04:57.850 "uuid": 
"f5e5c10e-7222-579d-ba3f-98e0224b4fa5", 00:04:57.850 "assigned_rate_limits": { 00:04:57.850 "rw_ios_per_sec": 0, 00:04:57.850 "rw_mbytes_per_sec": 0, 00:04:57.850 "r_mbytes_per_sec": 0, 00:04:57.850 "w_mbytes_per_sec": 0 00:04:57.850 }, 00:04:57.850 "claimed": false, 00:04:57.850 "zoned": false, 00:04:57.850 "supported_io_types": { 00:04:57.850 "read": true, 00:04:57.850 "write": true, 00:04:57.850 "unmap": true, 00:04:57.850 "flush": true, 00:04:57.850 "reset": true, 00:04:57.850 "nvme_admin": false, 00:04:57.850 "nvme_io": false, 00:04:57.850 "nvme_io_md": false, 00:04:57.850 "write_zeroes": true, 00:04:57.850 "zcopy": true, 00:04:57.850 "get_zone_info": false, 00:04:57.850 "zone_management": false, 00:04:57.850 "zone_append": false, 00:04:57.850 "compare": false, 00:04:57.850 "compare_and_write": false, 00:04:57.850 "abort": true, 00:04:57.850 "seek_hole": false, 00:04:57.850 "seek_data": false, 00:04:57.850 "copy": true, 00:04:57.850 "nvme_iov_md": false 00:04:57.850 }, 00:04:57.850 "memory_domains": [ 00:04:57.850 { 00:04:57.850 "dma_device_id": "system", 00:04:57.850 "dma_device_type": 1 00:04:57.850 }, 00:04:57.850 { 00:04:57.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.850 "dma_device_type": 2 00:04:57.850 } 00:04:57.850 ], 00:04:57.850 "driver_specific": { 00:04:57.850 "passthru": { 00:04:57.850 "name": "Passthru0", 00:04:57.850 "base_bdev_name": "Malloc2" 00:04:57.850 } 00:04:57.850 } 00:04:57.850 } 00:04:57.850 ]' 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.850 10:59:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.850 00:04:57.850 real 0m0.303s 00:04:57.850 user 0m0.189s 00:04:57.850 sys 0m0.050s 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.850 10:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.850 ************************************ 00:04:57.850 END TEST rpc_daemon_integrity 00:04:57.850 ************************************ 00:04:57.850 10:59:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:57.850 10:59:06 rpc -- rpc/rpc.sh@84 -- # killprocess 3878494 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 3878494 ']' 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@958 -- # kill -0 3878494 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@959 -- # uname 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.850 10:59:06 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3878494 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3878494' 00:04:57.850 killing process with pid 3878494 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@973 -- # kill 3878494 00:04:57.850 10:59:06 rpc -- common/autotest_common.sh@978 -- # wait 3878494 00:04:58.111 00:04:58.111 real 0m2.591s 00:04:58.111 user 0m3.327s 00:04:58.111 sys 0m0.780s 00:04:58.111 10:59:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.111 10:59:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.111 ************************************ 00:04:58.111 END TEST rpc 00:04:58.111 ************************************ 00:04:58.111 10:59:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:58.111 10:59:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.111 10:59:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.111 10:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.111 ************************************ 00:04:58.111 START TEST skip_rpc 00:04:58.111 ************************************ 00:04:58.111 10:59:06 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:58.373 * Looking for test storage... 
00:04:58.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.373 10:59:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.373 --rc genhtml_branch_coverage=1 00:04:58.373 --rc genhtml_function_coverage=1 00:04:58.373 --rc genhtml_legend=1 00:04:58.373 --rc geninfo_all_blocks=1 00:04:58.373 --rc geninfo_unexecuted_blocks=1 00:04:58.373 00:04:58.373 ' 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.373 --rc genhtml_branch_coverage=1 00:04:58.373 --rc genhtml_function_coverage=1 00:04:58.373 --rc genhtml_legend=1 00:04:58.373 --rc geninfo_all_blocks=1 00:04:58.373 --rc geninfo_unexecuted_blocks=1 00:04:58.373 00:04:58.373 ' 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:58.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.373 --rc genhtml_branch_coverage=1 00:04:58.373 --rc genhtml_function_coverage=1 00:04:58.373 --rc genhtml_legend=1 00:04:58.373 --rc geninfo_all_blocks=1 00:04:58.373 --rc geninfo_unexecuted_blocks=1 00:04:58.373 00:04:58.373 ' 00:04:58.373 10:59:06 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.374 --rc genhtml_branch_coverage=1 00:04:58.374 --rc genhtml_function_coverage=1 00:04:58.374 --rc genhtml_legend=1 00:04:58.374 --rc geninfo_all_blocks=1 00:04:58.374 --rc geninfo_unexecuted_blocks=1 00:04:58.374 00:04:58.374 ' 00:04:58.374 10:59:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.374 10:59:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.374 10:59:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:58.374 10:59:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.374 10:59:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.374 10:59:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.374 ************************************ 00:04:58.374 START TEST skip_rpc 00:04:58.374 ************************************ 00:04:58.374 10:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:58.374 10:59:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3879165 00:04:58.374 10:59:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.374 10:59:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:58.374 10:59:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:58.635 [2024-11-19 10:59:06.746033] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:04:58.635 [2024-11-19 10:59:06.746083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879165 ] 00:04:58.635 [2024-11-19 10:59:06.826065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.635 [2024-11-19 10:59:06.861991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.957 10:59:11 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3879165 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3879165 ']' 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3879165 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3879165 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3879165' 00:05:03.957 killing process with pid 3879165 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3879165 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3879165 00:05:03.957 00:05:03.957 real 0m5.287s 00:05:03.957 user 0m5.086s 00:05:03.957 sys 0m0.254s 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.957 10:59:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.957 ************************************ 00:05:03.957 END TEST skip_rpc 00:05:03.957 ************************************ 00:05:03.957 10:59:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:03.957 10:59:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.957 10:59:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.957 10:59:12 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.957 ************************************ 00:05:03.957 START TEST skip_rpc_with_json 00:05:03.957 ************************************ 00:05:03.957 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3880222 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3880222 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3880222 ']' 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.958 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.958 [2024-11-19 10:59:12.112665] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:03.958 [2024-11-19 10:59:12.112727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880222 ] 00:05:03.958 [2024-11-19 10:59:12.197060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.958 [2024-11-19 10:59:12.238524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.902 [2024-11-19 10:59:12.922039] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:04.902 request: 00:05:04.902 { 00:05:04.902 "trtype": "tcp", 00:05:04.902 "method": "nvmf_get_transports", 00:05:04.902 "req_id": 1 00:05:04.902 } 00:05:04.902 Got JSON-RPC error response 00:05:04.902 response: 00:05:04.902 { 00:05:04.902 "code": -19, 00:05:04.902 "message": "No such device" 00:05:04.902 } 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.902 [2024-11-19 10:59:12.934153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.902 10:59:12 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.902 10:59:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.902 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.902 10:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.902 { 00:05:04.902 "subsystems": [ 00:05:04.902 { 00:05:04.902 "subsystem": "fsdev", 00:05:04.902 "config": [ 00:05:04.902 { 00:05:04.902 "method": "fsdev_set_opts", 00:05:04.902 "params": { 00:05:04.902 "fsdev_io_pool_size": 65535, 00:05:04.902 "fsdev_io_cache_size": 256 00:05:04.902 } 00:05:04.902 } 00:05:04.902 ] 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "subsystem": "vfio_user_target", 00:05:04.902 "config": null 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "subsystem": "keyring", 00:05:04.902 "config": [] 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "subsystem": "iobuf", 00:05:04.902 "config": [ 00:05:04.902 { 00:05:04.902 "method": "iobuf_set_options", 00:05:04.902 "params": { 00:05:04.902 "small_pool_count": 8192, 00:05:04.902 "large_pool_count": 1024, 00:05:04.902 "small_bufsize": 8192, 00:05:04.902 "large_bufsize": 135168, 00:05:04.902 "enable_numa": false 00:05:04.902 } 00:05:04.902 } 00:05:04.902 ] 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "subsystem": "sock", 00:05:04.902 "config": [ 00:05:04.902 { 00:05:04.902 "method": "sock_set_default_impl", 00:05:04.902 "params": { 00:05:04.902 "impl_name": "posix" 00:05:04.902 } 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "method": "sock_impl_set_options", 00:05:04.902 "params": { 00:05:04.902 "impl_name": "ssl", 00:05:04.902 "recv_buf_size": 4096, 00:05:04.902 "send_buf_size": 4096, 
00:05:04.902 "enable_recv_pipe": true, 00:05:04.902 "enable_quickack": false, 00:05:04.902 "enable_placement_id": 0, 00:05:04.902 "enable_zerocopy_send_server": true, 00:05:04.902 "enable_zerocopy_send_client": false, 00:05:04.902 "zerocopy_threshold": 0, 00:05:04.902 "tls_version": 0, 00:05:04.902 "enable_ktls": false 00:05:04.902 } 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "method": "sock_impl_set_options", 00:05:04.902 "params": { 00:05:04.902 "impl_name": "posix", 00:05:04.902 "recv_buf_size": 2097152, 00:05:04.902 "send_buf_size": 2097152, 00:05:04.902 "enable_recv_pipe": true, 00:05:04.902 "enable_quickack": false, 00:05:04.902 "enable_placement_id": 0, 00:05:04.902 "enable_zerocopy_send_server": true, 00:05:04.902 "enable_zerocopy_send_client": false, 00:05:04.902 "zerocopy_threshold": 0, 00:05:04.902 "tls_version": 0, 00:05:04.902 "enable_ktls": false 00:05:04.902 } 00:05:04.902 } 00:05:04.902 ] 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "subsystem": "vmd", 00:05:04.902 "config": [] 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "subsystem": "accel", 00:05:04.902 "config": [ 00:05:04.902 { 00:05:04.902 "method": "accel_set_options", 00:05:04.902 "params": { 00:05:04.902 "small_cache_size": 128, 00:05:04.902 "large_cache_size": 16, 00:05:04.902 "task_count": 2048, 00:05:04.902 "sequence_count": 2048, 00:05:04.902 "buf_count": 2048 00:05:04.902 } 00:05:04.902 } 00:05:04.902 ] 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "subsystem": "bdev", 00:05:04.902 "config": [ 00:05:04.902 { 00:05:04.902 "method": "bdev_set_options", 00:05:04.902 "params": { 00:05:04.902 "bdev_io_pool_size": 65535, 00:05:04.902 "bdev_io_cache_size": 256, 00:05:04.902 "bdev_auto_examine": true, 00:05:04.902 "iobuf_small_cache_size": 128, 00:05:04.902 "iobuf_large_cache_size": 16 00:05:04.902 } 00:05:04.902 }, 00:05:04.902 { 00:05:04.902 "method": "bdev_raid_set_options", 00:05:04.902 "params": { 00:05:04.902 "process_window_size_kb": 1024, 00:05:04.902 "process_max_bandwidth_mb_sec": 0 
00:05:04.903 } 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "method": "bdev_iscsi_set_options", 00:05:04.903 "params": { 00:05:04.903 "timeout_sec": 30 00:05:04.903 } 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "method": "bdev_nvme_set_options", 00:05:04.903 "params": { 00:05:04.903 "action_on_timeout": "none", 00:05:04.903 "timeout_us": 0, 00:05:04.903 "timeout_admin_us": 0, 00:05:04.903 "keep_alive_timeout_ms": 10000, 00:05:04.903 "arbitration_burst": 0, 00:05:04.903 "low_priority_weight": 0, 00:05:04.903 "medium_priority_weight": 0, 00:05:04.903 "high_priority_weight": 0, 00:05:04.903 "nvme_adminq_poll_period_us": 10000, 00:05:04.903 "nvme_ioq_poll_period_us": 0, 00:05:04.903 "io_queue_requests": 0, 00:05:04.903 "delay_cmd_submit": true, 00:05:04.903 "transport_retry_count": 4, 00:05:04.903 "bdev_retry_count": 3, 00:05:04.903 "transport_ack_timeout": 0, 00:05:04.903 "ctrlr_loss_timeout_sec": 0, 00:05:04.903 "reconnect_delay_sec": 0, 00:05:04.903 "fast_io_fail_timeout_sec": 0, 00:05:04.903 "disable_auto_failback": false, 00:05:04.903 "generate_uuids": false, 00:05:04.903 "transport_tos": 0, 00:05:04.903 "nvme_error_stat": false, 00:05:04.903 "rdma_srq_size": 0, 00:05:04.903 "io_path_stat": false, 00:05:04.903 "allow_accel_sequence": false, 00:05:04.903 "rdma_max_cq_size": 0, 00:05:04.903 "rdma_cm_event_timeout_ms": 0, 00:05:04.903 "dhchap_digests": [ 00:05:04.903 "sha256", 00:05:04.903 "sha384", 00:05:04.903 "sha512" 00:05:04.903 ], 00:05:04.903 "dhchap_dhgroups": [ 00:05:04.903 "null", 00:05:04.903 "ffdhe2048", 00:05:04.903 "ffdhe3072", 00:05:04.903 "ffdhe4096", 00:05:04.903 "ffdhe6144", 00:05:04.903 "ffdhe8192" 00:05:04.903 ] 00:05:04.903 } 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "method": "bdev_nvme_set_hotplug", 00:05:04.903 "params": { 00:05:04.903 "period_us": 100000, 00:05:04.903 "enable": false 00:05:04.903 } 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "method": "bdev_wait_for_examine" 00:05:04.903 } 00:05:04.903 ] 00:05:04.903 }, 00:05:04.903 { 
00:05:04.903 "subsystem": "scsi", 00:05:04.903 "config": null 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "subsystem": "scheduler", 00:05:04.903 "config": [ 00:05:04.903 { 00:05:04.903 "method": "framework_set_scheduler", 00:05:04.903 "params": { 00:05:04.903 "name": "static" 00:05:04.903 } 00:05:04.903 } 00:05:04.903 ] 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "subsystem": "vhost_scsi", 00:05:04.903 "config": [] 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "subsystem": "vhost_blk", 00:05:04.903 "config": [] 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "subsystem": "ublk", 00:05:04.903 "config": [] 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "subsystem": "nbd", 00:05:04.903 "config": [] 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "subsystem": "nvmf", 00:05:04.903 "config": [ 00:05:04.903 { 00:05:04.903 "method": "nvmf_set_config", 00:05:04.903 "params": { 00:05:04.903 "discovery_filter": "match_any", 00:05:04.903 "admin_cmd_passthru": { 00:05:04.903 "identify_ctrlr": false 00:05:04.903 }, 00:05:04.903 "dhchap_digests": [ 00:05:04.903 "sha256", 00:05:04.903 "sha384", 00:05:04.903 "sha512" 00:05:04.903 ], 00:05:04.903 "dhchap_dhgroups": [ 00:05:04.903 "null", 00:05:04.903 "ffdhe2048", 00:05:04.903 "ffdhe3072", 00:05:04.903 "ffdhe4096", 00:05:04.903 "ffdhe6144", 00:05:04.903 "ffdhe8192" 00:05:04.903 ] 00:05:04.903 } 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "method": "nvmf_set_max_subsystems", 00:05:04.903 "params": { 00:05:04.903 "max_subsystems": 1024 00:05:04.903 } 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "method": "nvmf_set_crdt", 00:05:04.903 "params": { 00:05:04.903 "crdt1": 0, 00:05:04.903 "crdt2": 0, 00:05:04.903 "crdt3": 0 00:05:04.903 } 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "method": "nvmf_create_transport", 00:05:04.903 "params": { 00:05:04.903 "trtype": "TCP", 00:05:04.903 "max_queue_depth": 128, 00:05:04.903 "max_io_qpairs_per_ctrlr": 127, 00:05:04.903 "in_capsule_data_size": 4096, 00:05:04.903 "max_io_size": 131072, 00:05:04.903 
"io_unit_size": 131072, 00:05:04.903 "max_aq_depth": 128, 00:05:04.903 "num_shared_buffers": 511, 00:05:04.903 "buf_cache_size": 4294967295, 00:05:04.903 "dif_insert_or_strip": false, 00:05:04.903 "zcopy": false, 00:05:04.903 "c2h_success": true, 00:05:04.903 "sock_priority": 0, 00:05:04.903 "abort_timeout_sec": 1, 00:05:04.903 "ack_timeout": 0, 00:05:04.903 "data_wr_pool_size": 0 00:05:04.903 } 00:05:04.903 } 00:05:04.903 ] 00:05:04.903 }, 00:05:04.903 { 00:05:04.903 "subsystem": "iscsi", 00:05:04.903 "config": [ 00:05:04.903 { 00:05:04.903 "method": "iscsi_set_options", 00:05:04.903 "params": { 00:05:04.903 "node_base": "iqn.2016-06.io.spdk", 00:05:04.903 "max_sessions": 128, 00:05:04.903 "max_connections_per_session": 2, 00:05:04.903 "max_queue_depth": 64, 00:05:04.903 "default_time2wait": 2, 00:05:04.903 "default_time2retain": 20, 00:05:04.903 "first_burst_length": 8192, 00:05:04.903 "immediate_data": true, 00:05:04.903 "allow_duplicated_isid": false, 00:05:04.903 "error_recovery_level": 0, 00:05:04.903 "nop_timeout": 60, 00:05:04.903 "nop_in_interval": 30, 00:05:04.903 "disable_chap": false, 00:05:04.903 "require_chap": false, 00:05:04.903 "mutual_chap": false, 00:05:04.903 "chap_group": 0, 00:05:04.903 "max_large_datain_per_connection": 64, 00:05:04.903 "max_r2t_per_connection": 4, 00:05:04.903 "pdu_pool_size": 36864, 00:05:04.903 "immediate_data_pool_size": 16384, 00:05:04.903 "data_out_pool_size": 2048 00:05:04.903 } 00:05:04.903 } 00:05:04.903 ] 00:05:04.903 } 00:05:04.903 ] 00:05:04.903 } 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3880222 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3880222 ']' 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3880222 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3880222 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3880222' 00:05:04.903 killing process with pid 3880222 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3880222 00:05:04.903 10:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3880222 00:05:05.164 10:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3880546 00:05:05.164 10:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:05.164 10:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3880546 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3880546 ']' 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3880546 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3880546 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3880546' 00:05:10.455 killing process with pid 3880546 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3880546 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3880546 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:10.455 00:05:10.455 real 0m6.615s 00:05:10.455 user 0m6.517s 00:05:10.455 sys 0m0.569s 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.455 ************************************ 00:05:10.455 END TEST skip_rpc_with_json 00:05:10.455 ************************************ 00:05:10.455 10:59:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:10.455 10:59:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.455 10:59:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.455 10:59:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.455 ************************************ 00:05:10.455 START TEST skip_rpc_with_delay 00:05:10.455 ************************************ 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:10.455 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.455 [2024-11-19 10:59:18.806489] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:10.716 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:10.716 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.716 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.716 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.716 00:05:10.716 real 0m0.078s 00:05:10.716 user 0m0.046s 00:05:10.716 sys 0m0.032s 00:05:10.716 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.716 10:59:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:10.716 ************************************ 00:05:10.716 END TEST skip_rpc_with_delay 00:05:10.716 ************************************ 00:05:10.716 10:59:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:10.716 10:59:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:10.716 10:59:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:10.716 10:59:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.716 10:59:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.716 10:59:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.716 ************************************ 00:05:10.716 START TEST exit_on_failed_rpc_init 00:05:10.716 ************************************ 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3881738 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3881738 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3881738 ']' 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.716 10:59:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.716 [2024-11-19 10:59:18.965781] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:10.716 [2024-11-19 10:59:18.965841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881738 ] 00:05:10.717 [2024-11-19 10:59:19.049676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.977 [2024-11-19 10:59:19.092019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.551 
10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:11.551 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.551 [2024-11-19 10:59:19.815688] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:11.551 [2024-11-19 10:59:19.815741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881947 ] 00:05:11.812 [2024-11-19 10:59:19.910754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.812 [2024-11-19 10:59:19.946608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.812 [2024-11-19 10:59:19.946659] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:11.812 [2024-11-19 10:59:19.946669] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:11.812 [2024-11-19 10:59:19.946676] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3881738 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3881738 ']' 00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3881738 00:05:11.812 10:59:19 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:11.812 10:59:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3881738
00:05:11.812 10:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:11.812 10:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:11.812 10:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3881738'
00:05:11.812 killing process with pid 3881738
00:05:11.812 10:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3881738
00:05:11.812 10:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3881738
00:05:12.073
00:05:12.073 real 0m1.355s
00:05:12.073 user 0m1.579s
00:05:12.073 sys 0m0.389s
00:05:12.073 10:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.073 10:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:12.073 ************************************
00:05:12.073 END TEST exit_on_failed_rpc_init
00:05:12.073 ************************************
00:05:12.073 10:59:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:12.073
00:05:12.073 real 0m13.855s
00:05:12.073 user 0m13.454s
00:05:12.073 sys 0m1.567s
00:05:12.073 10:59:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.073 10:59:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:12.073 ************************************
00:05:12.073 END TEST skip_rpc
00:05:12.073 ************************************
00:05:12.073 10:59:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:12.073 10:59:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:12.073 10:59:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.073 10:59:20 -- common/autotest_common.sh@10 -- # set +x
00:05:12.073 ************************************
00:05:12.073 START TEST rpc_client
00:05:12.073 ************************************
00:05:12.073 10:59:20 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:12.335 * Looking for test storage...
00:05:12.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:12.335 10:59:20 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:12.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.335 --rc genhtml_branch_coverage=1
00:05:12.335 --rc genhtml_function_coverage=1
00:05:12.335 --rc genhtml_legend=1
00:05:12.335 --rc geninfo_all_blocks=1
00:05:12.335 --rc geninfo_unexecuted_blocks=1
00:05:12.335
00:05:12.335 '
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:12.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.335 --rc genhtml_branch_coverage=1
00:05:12.335 --rc genhtml_function_coverage=1
00:05:12.335 --rc genhtml_legend=1
00:05:12.335 --rc geninfo_all_blocks=1
00:05:12.335 --rc geninfo_unexecuted_blocks=1
00:05:12.335
00:05:12.335 '
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:12.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.335 --rc genhtml_branch_coverage=1
00:05:12.335 --rc genhtml_function_coverage=1
00:05:12.335 --rc genhtml_legend=1
00:05:12.335 --rc geninfo_all_blocks=1
00:05:12.335 --rc geninfo_unexecuted_blocks=1
00:05:12.335
00:05:12.335 '
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:12.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.335 --rc genhtml_branch_coverage=1
00:05:12.335 --rc genhtml_function_coverage=1
00:05:12.335 --rc genhtml_legend=1
00:05:12.335 --rc geninfo_all_blocks=1
00:05:12.335 --rc geninfo_unexecuted_blocks=1
00:05:12.335
00:05:12.335 '
00:05:12.335 10:59:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:12.335 OK
00:05:12.335 10:59:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:12.335
00:05:12.335 real 0m0.224s
00:05:12.335 user 0m0.140s
00:05:12.335 sys 0m0.096s
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.335 10:59:20 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:12.335 ************************************
00:05:12.335 END TEST rpc_client
00:05:12.335 ************************************
00:05:12.335 10:59:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:12.335 10:59:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:12.335 10:59:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.335 10:59:20 -- common/autotest_common.sh@10 -- # set +x
00:05:12.335 ************************************
00:05:12.335 START TEST json_config
00:05:12.335 ************************************
00:05:12.335 10:59:20 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:12.598 10:59:20 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:12.598 10:59:20 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:05:12.598 10:59:20 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:12.598 10:59:20 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:12.598 10:59:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:12.598 10:59:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:12.598 10:59:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:12.598 10:59:20 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:12.598 10:59:20 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:12.598 10:59:20 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:12.598 10:59:20 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:12.598 10:59:20 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:12.598 10:59:20 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:12.598 10:59:20 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:12.598 10:59:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:12.598 10:59:20 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:12.598 10:59:20 json_config -- scripts/common.sh@345 -- # : 1
00:05:12.598 10:59:20 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:12.598 10:59:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:12.598 10:59:20 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:12.598 10:59:20 json_config -- scripts/common.sh@353 -- # local d=1
00:05:12.598 10:59:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:12.598 10:59:20 json_config -- scripts/common.sh@355 -- # echo 1
00:05:12.598 10:59:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:12.598 10:59:20 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:12.598 10:59:20 json_config -- scripts/common.sh@353 -- # local d=2
00:05:12.598 10:59:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:12.598 10:59:20 json_config -- scripts/common.sh@355 -- # echo 2
00:05:12.598 10:59:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:12.598 10:59:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:12.598 10:59:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:12.598 10:59:20 json_config -- scripts/common.sh@368 -- # return 0
00:05:12.598 10:59:20 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:12.598 10:59:20 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:12.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.598 --rc genhtml_branch_coverage=1
00:05:12.598 --rc genhtml_function_coverage=1
00:05:12.598 --rc genhtml_legend=1
00:05:12.598 --rc geninfo_all_blocks=1
00:05:12.598 --rc geninfo_unexecuted_blocks=1
00:05:12.598
00:05:12.598 '
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:12.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.599 --rc genhtml_branch_coverage=1
00:05:12.599 --rc genhtml_function_coverage=1
00:05:12.599 --rc genhtml_legend=1
00:05:12.599 --rc geninfo_all_blocks=1
00:05:12.599 --rc geninfo_unexecuted_blocks=1
00:05:12.599
00:05:12.599 '
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:12.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.599 --rc genhtml_branch_coverage=1
00:05:12.599 --rc genhtml_function_coverage=1
00:05:12.599 --rc genhtml_legend=1
00:05:12.599 --rc geninfo_all_blocks=1
00:05:12.599 --rc geninfo_unexecuted_blocks=1
00:05:12.599
00:05:12.599 '
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:12.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.599 --rc genhtml_branch_coverage=1
00:05:12.599 --rc genhtml_function_coverage=1
00:05:12.599 --rc genhtml_legend=1
00:05:12.599 --rc geninfo_all_blocks=1
00:05:12.599 --rc geninfo_unexecuted_blocks=1
00:05:12.599
00:05:12.599 '
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:12.599 10:59:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:12.599 10:59:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:12.599 10:59:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:12.599 10:59:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:12.599 10:59:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:12.599 10:59:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:12.599 10:59:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:12.599 10:59:20 json_config -- paths/export.sh@5 -- # export PATH
00:05:12.599 10:59:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@51 -- # : 0
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:12.599 10:59:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:05:12.599 INFO: JSON configuration test init
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:12.599 10:59:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:05:12.599 10:59:20 json_config -- json_config/common.sh@9 -- # local app=target
00:05:12.599 10:59:20 json_config -- json_config/common.sh@10 -- # shift
00:05:12.599 10:59:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:12.599 10:59:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:12.599 10:59:20 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:12.599 10:59:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:12.599 10:59:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:12.599 10:59:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3882352
00:05:12.599 10:59:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:12.599 Waiting for target to run...
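The `waitforlisten` step traced in the records that follow polls until the `spdk_tgt` process is up and its UNIX-domain RPC socket is available, with a `max_retries` cap. A minimal sketch of that pattern, assuming a generic target process: `wait_for_socket` is an illustrative name, not SPDK's actual helper (which probes the socket via `rpc.py`), and `python3` is used here only to create a demo socket.

```shell
# Hypothetical sketch of the waitforlisten pattern: poll until the target
# process is alive AND its UNIX-domain socket exists, give up after
# max_retries attempts.
wait_for_socket() {
    local pid=$1 sock=$2 max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while waiting
        [[ -S $sock ]] && return 0               # socket is up
        sleep 0.1
    done
    return 1                                     # gave up after max_retries
}

# Demo: a stand-in "target" process whose socket appears shortly after start.
sleep 5 & pid=$!
sock=$(mktemp -u)
( sleep 0.3; python3 -c 'import socket, sys
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])' "$sock" ) &
if wait_for_socket "$pid" "$sock" 50; then
    echo "target is listening on $sock"
fi
kill "$pid" 2>/dev/null
rm -f "$sock"
```

Polling the socket rather than sleeping a fixed interval is what lets the harness proceed as soon as the target is ready while still failing fast if the target crashes during startup.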
00:05:12.599 10:59:20 json_config -- json_config/common.sh@25 -- # waitforlisten 3882352 /var/tmp/spdk_tgt.sock
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 3882352 ']'
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:12.599 10:59:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:12.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:12.599 10:59:20 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:12.860 [2024-11-19 10:59:20.978577] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization...
00:05:12.860 [2024-11-19 10:59:20.978656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882352 ]
00:05:13.120 [2024-11-19 10:59:21.283677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.120 [2024-11-19 10:59:21.314954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.693 10:59:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:13.693 10:59:21 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:13.693 10:59:21 json_config -- json_config/common.sh@26 -- # echo ''
00:05:13.693
00:05:13.693 10:59:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:05:13.693 10:59:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:05:13.693 10:59:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:13.693 10:59:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.693 10:59:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:05:13.693 10:59:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:05:13.693 10:59:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:13.693 10:59:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.693 10:59:21 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:13.693 10:59:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:05:13.693 10:59:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:14.265 10:59:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:14.265 10:59:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:05:14.265 10:59:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@51 -- # local get_types
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@54 -- # sort
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:05:14.265 10:59:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:14.265 10:59:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@62 -- # return 0
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:05:14.265 10:59:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:05:14.266 10:59:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:05:14.266 10:59:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:05:14.266 10:59:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:05:14.266 10:59:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:05:14.266 10:59:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:14.266 10:59:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:14.266 10:59:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:14.266 10:59:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:05:14.266 10:59:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:05:14.266 10:59:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:14.266 10:59:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:14.526 MallocForNvmf0
00:05:14.526 10:59:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
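The `type_diff` computation traced above relies on a compact shell idiom: concatenate the expected and reported type lists, split them into lines, and let `sort | uniq -u` keep only lines that occur exactly once. When each list is duplicate-free, that is the symmetric difference of the two sets, and an empty result means they match. A self-contained sketch using the values from the trace:

```shell
# Expected vs. reported notification types, copied from the trace above.
enabled_types=(bdev_register bdev_unregister fsdev_register fsdev_unregister)
get_types=(fsdev_register fsdev_unregister bdev_register bdev_unregister)

# uniq -u drops every line that appears more than once, leaving only the
# elements present in one list but not the other.
type_diff=$(echo "${enabled_types[@]}" "${get_types[@]}" | tr ' ' '\n' | sort | uniq -u)

if [[ -n $type_diff ]]; then
    echo "notification type mismatch: $type_diff"
else
    echo "notification types match"   # the trace's empty type_diff case
fi
```

Order does not matter because `sort` canonicalizes both lists before `uniq` counts repeats, which is why the trace passes even though `get_types` came back in a different order than `enabled_types`.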
00:05:14.526 10:59:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:14.787 MallocForNvmf1
00:05:14.787 10:59:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:14.787 10:59:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:14.787 [2024-11-19 10:59:23.077673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:14.787 10:59:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:14.787 10:59:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:15.048 10:59:23 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:15.048 10:59:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:15.309 10:59:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:15.309 10:59:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:15.309 10:59:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:15.309 10:59:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:15.570 [2024-11-19 10:59:23.731844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:15.570 10:59:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:05:15.570 10:59:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:15.570 10:59:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:15.570 10:59:23 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:05:15.570 10:59:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:15.570 10:59:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:15.570 10:59:23 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:05:15.570 10:59:23 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:15.570 10:59:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:15.830 MallocBdevForConfigChangeCheck
00:05:15.830 10:59:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:05:15.830 10:59:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:15.830 10:59:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:15.831 10:59:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:05:15.831 10:59:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:16.092 10:59:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:05:16.092 INFO: shutting down applications...
00:05:16.092 10:59:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:05:16.092 10:59:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:05:16.092 10:59:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:05:16.092 10:59:24 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:16.665 Calling clear_iscsi_subsystem
00:05:16.665 Calling clear_nvmf_subsystem
00:05:16.665 Calling clear_nbd_subsystem
00:05:16.665 Calling clear_ublk_subsystem
00:05:16.665 Calling clear_vhost_blk_subsystem
00:05:16.665 Calling clear_vhost_scsi_subsystem
00:05:16.665 Calling clear_bdev_subsystem
00:05:16.665 10:59:24 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:16.665 10:59:24 json_config -- json_config/json_config.sh@350 -- # count=100
00:05:16.665 10:59:24 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:05:16.665 10:59:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:16.665 10:59:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:16.665 10:59:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:16.926 10:59:25 json_config -- json_config/json_config.sh@352 -- # break
00:05:16.926 10:59:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:05:16.926 10:59:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:05:16.926 10:59:25 json_config -- json_config/common.sh@31 -- # local app=target
00:05:16.926 10:59:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:16.926 10:59:25 json_config -- json_config/common.sh@35 -- # [[ -n 3882352 ]]
00:05:16.926 10:59:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3882352
00:05:16.926 10:59:25 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:16.926 10:59:25 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:16.926 10:59:25 json_config -- json_config/common.sh@41 -- # kill -0 3882352
00:05:16.926 10:59:25 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:17.497 10:59:25 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:17.497 10:59:25 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:17.497 10:59:25 json_config -- json_config/common.sh@41 -- # kill -0 3882352
00:05:17.497 10:59:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:17.497 10:59:25 json_config -- json_config/common.sh@43 -- # break
00:05:17.497 10:59:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:17.497 10:59:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:17.497 SPDK target shutdown done
00:05:17.497 10:59:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:05:17.497 INFO: relaunching applications...
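The shutdown sequence traced above (json_config/common.sh) sends SIGINT, then polls with `kill -0` up to 30 times at 0.5 s intervals before declaring shutdown done. A minimal sketch of that pattern: `shutdown_app` is an illustrative name, and the signal is a parameter because background jobs in non-interactive shells ignore SIGINT, so the demo below uses TERM instead.

```shell
# Sketch of a graceful-shutdown-with-retry loop: signal the process,
# then poll for its disappearance instead of waiting a fixed time.
shutdown_app() {
    local pid=$1 sig=${2:-INT} i
    kill -s "$sig" "$pid" 2>/dev/null
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'   # same message as the log
            return 0
        fi
        sleep 0.5
    done
    echo "process $pid still alive after 15s" >&2
    return 1
}

# Demo: start a long-running background job and shut it down.
sleep 60 & pid=$!
shutdown_app "$pid" TERM
```

The retry cap matters: if the target hangs during teardown, the loop exits nonzero after 15 seconds instead of blocking the whole pipeline, and a caller can then escalate to a harder signal.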
00:05:17.497 10:59:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.497 10:59:25 json_config -- json_config/common.sh@9 -- # local app=target 00:05:17.497 10:59:25 json_config -- json_config/common.sh@10 -- # shift 00:05:17.497 10:59:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.497 10:59:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.497 10:59:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.497 10:59:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.497 10:59:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.497 10:59:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3883352 00:05:17.497 10:59:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.497 Waiting for target to run... 00:05:17.497 10:59:25 json_config -- json_config/common.sh@25 -- # waitforlisten 3883352 /var/tmp/spdk_tgt.sock 00:05:17.497 10:59:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.497 10:59:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 3883352 ']' 00:05:17.497 10:59:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.497 10:59:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.497 10:59:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:17.497 10:59:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.497 10:59:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.497 [2024-11-19 10:59:25.715096] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:17.497 [2024-11-19 10:59:25.715153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883352 ] 00:05:17.758 [2024-11-19 10:59:26.007899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.758 [2024-11-19 10:59:26.037644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.329 [2024-11-19 10:59:26.555733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.329 [2024-11-19 10:59:26.588113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.329 10:59:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.329 10:59:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:18.329 10:59:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:18.329 00:05:18.329 10:59:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:18.329 10:59:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:18.329 INFO: Checking if target configuration is the same... 
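`waitforlisten` above blocks until the relaunched target accepts connections on `/var/tmp/spdk_tgt.sock` before any RPC is issued. A simplified sketch of that wait-for-socket pattern (hypothetical function name and retry budget; a plain `python3` socket connect stands in for the real RPC ping in `autotest_common.sh`):

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket accepts connections, as waitforlisten does
# before talking to a freshly launched spdk_tgt.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # connect attempt doubles as the liveness probe
        if python3 -c 'import socket,sys; s=socket.socket(socket.AF_UNIX); s.connect(sys.argv[1])' "$sock" 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# demo: bind a listener in the background, then wait on it
sock=$(mktemp -u /tmp/demo.sock.XXX)
python3 -c 'import socket,sys,time; s=socket.socket(socket.AF_UNIX); s.bind(sys.argv[1]); s.listen(1); time.sleep(2)' "$sock" &
listener=$!
if wait_for_socket "$sock"; then
    echo "listening on $sock"
fi
kill "$listener" 2>/dev/null
rm -f "$sock"
```

Retrying the connect rather than checking for the socket file avoids the race where the file exists but the server has not called `listen()` yet.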
00:05:18.329 10:59:26 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.329 10:59:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:18.329 10:59:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.329 + '[' 2 -ne 2 ']' 00:05:18.329 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.329 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:18.329 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.329 +++ basename /dev/fd/62 00:05:18.329 ++ mktemp /tmp/62.XXX 00:05:18.329 + tmp_file_1=/tmp/62.mYy 00:05:18.329 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.329 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.329 + tmp_file_2=/tmp/spdk_tgt_config.json.81D 00:05:18.329 + ret=0 00:05:18.329 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.900 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.900 + diff -u /tmp/62.mYy /tmp/spdk_tgt_config.json.81D 00:05:18.900 + echo 'INFO: JSON config files are the same' 00:05:18.900 INFO: JSON config files are the same 00:05:18.900 + rm /tmp/62.mYy /tmp/spdk_tgt_config.json.81D 00:05:18.900 + exit 0 00:05:18.900 10:59:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:18.900 10:59:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.900 INFO: changing configuration and checking if this can be detected... 
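The comparison above saves the running target's config over RPC, sorts both JSON files with `config_filter.py -method sort`, and byte-diffs the results. The normalize-then-diff idea can be sketched on its own; here python's `json` module with `sort_keys` stands in for `config_filter.py`, and the file names are illustrative:

```shell
#!/usr/bin/env bash
# Two JSON files that differ only in key order should compare equal after
# canonicalization (sorted keys, fixed indentation).
normalize_json() {
    python3 -c 'import json,sys; json.dump(json.load(sys.stdin), sys.stdout, sort_keys=True, indent=2)' < "$1"
}

tmp_file_1=$(mktemp /tmp/cfg1.XXX)
tmp_file_2=$(mktemp /tmp/cfg2.XXX)
echo '{"b": 1, "a": 2}' > "$tmp_file_1"
echo '{"a": 2, "b": 1}' > "$tmp_file_2"

if diff -u <(normalize_json "$tmp_file_1") <(normalize_json "$tmp_file_2") > /dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2"
```

Sorting before diffing is what makes the check robust: the target is free to emit subsystems in any order, so a raw `diff` of saved configs would report spurious changes.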
00:05:18.900 10:59:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.900 10:59:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.900 10:59:27 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.900 10:59:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:18.900 10:59:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.900 + '[' 2 -ne 2 ']' 00:05:18.900 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.900 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:18.900 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.900 +++ basename /dev/fd/62 00:05:18.900 ++ mktemp /tmp/62.XXX 00:05:18.900 + tmp_file_1=/tmp/62.8Si 00:05:18.900 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.900 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.900 + tmp_file_2=/tmp/spdk_tgt_config.json.sZY 00:05:18.900 + ret=0 00:05:18.900 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.472 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.472 + diff -u /tmp/62.8Si /tmp/spdk_tgt_config.json.sZY 00:05:19.472 + ret=1 00:05:19.472 + echo '=== Start of file: /tmp/62.8Si ===' 00:05:19.472 + cat /tmp/62.8Si 00:05:19.472 + echo '=== End of file: /tmp/62.8Si ===' 00:05:19.472 + echo '' 00:05:19.472 + echo '=== Start of file: /tmp/spdk_tgt_config.json.sZY ===' 00:05:19.472 + cat /tmp/spdk_tgt_config.json.sZY 00:05:19.472 + echo '=== End of file: /tmp/spdk_tgt_config.json.sZY ===' 00:05:19.472 + echo '' 00:05:19.472 + rm /tmp/62.8Si /tmp/spdk_tgt_config.json.sZY 00:05:19.472 + exit 1 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:19.472 INFO: configuration change detected. 
00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@324 -- # [[ -n 3883352 ]] 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.472 10:59:27 json_config -- json_config/json_config.sh@330 -- # killprocess 3883352 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@954 -- # '[' -z 3883352 ']' 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@958 -- # kill -0 
3883352 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@959 -- # uname 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3883352 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3883352' 00:05:19.472 killing process with pid 3883352 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@973 -- # kill 3883352 00:05:19.472 10:59:27 json_config -- common/autotest_common.sh@978 -- # wait 3883352 00:05:19.734 10:59:27 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.734 10:59:27 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:19.734 10:59:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.734 10:59:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.734 10:59:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:19.734 10:59:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:19.734 INFO: Success 00:05:19.734 00:05:19.734 real 0m7.352s 00:05:19.734 user 0m8.833s 00:05:19.734 sys 0m1.956s 00:05:19.734 10:59:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.734 10:59:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.734 ************************************ 00:05:19.734 END TEST json_config 00:05:19.734 ************************************ 00:05:19.734 10:59:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.734 10:59:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.734 10:59:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.734 10:59:28 -- common/autotest_common.sh@10 -- # set +x 00:05:19.996 ************************************ 00:05:19.996 START TEST json_config_extra_key 00:05:19.996 ************************************ 00:05:19.996 10:59:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.996 10:59:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.997 10:59:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.997 10:59:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.997 10:59:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:19.997 10:59:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.997 10:59:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.997 --rc genhtml_branch_coverage=1 00:05:19.997 --rc genhtml_function_coverage=1 00:05:19.997 --rc genhtml_legend=1 00:05:19.997 --rc geninfo_all_blocks=1 
00:05:19.997 --rc geninfo_unexecuted_blocks=1 00:05:19.997 00:05:19.997 ' 00:05:19.997 10:59:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.997 --rc genhtml_branch_coverage=1 00:05:19.997 --rc genhtml_function_coverage=1 00:05:19.997 --rc genhtml_legend=1 00:05:19.997 --rc geninfo_all_blocks=1 00:05:19.997 --rc geninfo_unexecuted_blocks=1 00:05:19.997 00:05:19.997 ' 00:05:19.997 10:59:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.997 --rc genhtml_branch_coverage=1 00:05:19.997 --rc genhtml_function_coverage=1 00:05:19.997 --rc genhtml_legend=1 00:05:19.997 --rc geninfo_all_blocks=1 00:05:19.997 --rc geninfo_unexecuted_blocks=1 00:05:19.997 00:05:19.997 ' 00:05:19.997 10:59:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.997 --rc genhtml_branch_coverage=1 00:05:19.997 --rc genhtml_function_coverage=1 00:05:19.997 --rc genhtml_legend=1 00:05:19.997 --rc geninfo_all_blocks=1 00:05:19.997 --rc geninfo_unexecuted_blocks=1 00:05:19.997 00:05:19.997 ' 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.997 10:59:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.997 10:59:28 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.997 10:59:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.997 10:59:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.997 10:59:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.997 10:59:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:19.997 10:59:28 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.997 10:59:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:19.997 INFO: launching applications... 00:05:19.997 10:59:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.997 10:59:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.997 10:59:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.997 10:59:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.997 10:59:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.998 10:59:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.998 10:59:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.998 10:59:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.998 10:59:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3884021 00:05:19.998 10:59:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.998 Waiting for target to run... 
00:05:19.998 10:59:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3884021 /var/tmp/spdk_tgt.sock 00:05:19.998 10:59:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3884021 ']' 00:05:19.998 10:59:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.998 10:59:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.998 10:59:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.998 10:59:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.998 10:59:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.998 10:59:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.259 [2024-11-19 10:59:28.374384] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:20.259 [2024-11-19 10:59:28.374456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884021 ] 00:05:20.520 [2024-11-19 10:59:28.657908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.520 [2024-11-19 10:59:28.687483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.091 10:59:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.091 10:59:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:21.091 00:05:21.091 10:59:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:21.091 INFO: shutting down applications... 00:05:21.091 10:59:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3884021 ]] 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3884021 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3884021 00:05:21.091 10:59:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.352 10:59:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.352 10:59:29 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.352 10:59:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3884021 00:05:21.352 10:59:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.352 10:59:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.352 10:59:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.352 10:59:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.352 SPDK target shutdown done 00:05:21.352 10:59:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.352 Success 00:05:21.352 00:05:21.352 real 0m1.561s 00:05:21.352 user 0m1.206s 00:05:21.352 sys 0m0.394s 00:05:21.352 10:59:29 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.352 10:59:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.352 ************************************ 00:05:21.352 END TEST json_config_extra_key 00:05:21.352 ************************************ 00:05:21.613 10:59:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.613 10:59:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.613 10:59:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.613 10:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:21.613 ************************************ 00:05:21.613 START TEST alias_rpc 00:05:21.613 ************************************ 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.613 * Looking for test storage... 
00:05:21.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.613 10:59:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.613 --rc genhtml_branch_coverage=1 00:05:21.613 --rc genhtml_function_coverage=1 00:05:21.613 --rc genhtml_legend=1 00:05:21.613 --rc geninfo_all_blocks=1 00:05:21.613 --rc geninfo_unexecuted_blocks=1 00:05:21.613 00:05:21.613 ' 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.613 --rc genhtml_branch_coverage=1 00:05:21.613 --rc genhtml_function_coverage=1 00:05:21.613 --rc genhtml_legend=1 00:05:21.613 --rc geninfo_all_blocks=1 00:05:21.613 --rc geninfo_unexecuted_blocks=1 00:05:21.613 00:05:21.613 ' 00:05:21.613 10:59:29 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:21.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.613 --rc genhtml_branch_coverage=1 00:05:21.613 --rc genhtml_function_coverage=1 00:05:21.614 --rc genhtml_legend=1 00:05:21.614 --rc geninfo_all_blocks=1 00:05:21.614 --rc geninfo_unexecuted_blocks=1 00:05:21.614 00:05:21.614 ' 00:05:21.614 10:59:29 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.614 --rc genhtml_branch_coverage=1 00:05:21.614 --rc genhtml_function_coverage=1 00:05:21.614 --rc genhtml_legend=1 00:05:21.614 --rc geninfo_all_blocks=1 00:05:21.614 --rc geninfo_unexecuted_blocks=1 00:05:21.614 00:05:21.614 ' 00:05:21.614 10:59:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.614 10:59:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3884414 00:05:21.614 10:59:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3884414 00:05:21.614 10:59:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.614 10:59:29 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3884414 ']' 00:05:21.614 10:59:29 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.614 10:59:29 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.614 10:59:29 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.614 10:59:29 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.614 10:59:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.875 [2024-11-19 10:59:30.002252] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:21.875 [2024-11-19 10:59:30.002325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884414 ] 00:05:21.875 [2024-11-19 10:59:30.090177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.875 [2024-11-19 10:59:30.132944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.817 10:59:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.817 10:59:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:22.817 10:59:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:22.817 10:59:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3884414 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3884414 ']' 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3884414 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3884414 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3884414' 00:05:22.817 killing process with pid 3884414 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 3884414 00:05:22.817 10:59:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 3884414 00:05:23.078 00:05:23.078 real 0m1.539s 00:05:23.078 user 0m1.698s 00:05:23.078 sys 0m0.420s 00:05:23.078 10:59:31 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.078 10:59:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.078 ************************************ 00:05:23.078 END TEST alias_rpc 00:05:23.079 ************************************ 00:05:23.079 10:59:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:23.079 10:59:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:23.079 10:59:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.079 10:59:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.079 10:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:23.079 ************************************ 00:05:23.079 START TEST spdkcli_tcp 00:05:23.079 ************************************ 00:05:23.079 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:23.340 * Looking for test storage... 
00:05:23.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.340 10:59:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.340 --rc genhtml_branch_coverage=1 00:05:23.340 --rc genhtml_function_coverage=1 00:05:23.340 --rc genhtml_legend=1 00:05:23.340 --rc geninfo_all_blocks=1 00:05:23.340 --rc geninfo_unexecuted_blocks=1 00:05:23.340 00:05:23.340 ' 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.340 --rc genhtml_branch_coverage=1 00:05:23.340 --rc genhtml_function_coverage=1 00:05:23.340 --rc genhtml_legend=1 00:05:23.340 --rc geninfo_all_blocks=1 00:05:23.340 --rc geninfo_unexecuted_blocks=1 00:05:23.340 00:05:23.340 ' 00:05:23.340 10:59:31 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.340 --rc genhtml_branch_coverage=1 00:05:23.340 --rc genhtml_function_coverage=1 00:05:23.340 --rc genhtml_legend=1 00:05:23.340 --rc geninfo_all_blocks=1 00:05:23.340 --rc geninfo_unexecuted_blocks=1 00:05:23.340 00:05:23.340 ' 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.340 --rc genhtml_branch_coverage=1 00:05:23.340 --rc genhtml_function_coverage=1 00:05:23.340 --rc genhtml_legend=1 00:05:23.340 --rc geninfo_all_blocks=1 00:05:23.340 --rc geninfo_unexecuted_blocks=1 00:05:23.340 00:05:23.340 ' 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3884813 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3884813 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3884813 ']' 00:05:23.340 
10:59:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.340 10:59:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.340 10:59:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:23.340 [2024-11-19 10:59:31.601506] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:23.340 [2024-11-19 10:59:31.601577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884813 ] 00:05:23.340 [2024-11-19 10:59:31.684059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.601 [2024-11-19 10:59:31.727330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.601 [2024-11-19 10:59:31.727333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.173 10:59:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.173 10:59:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:24.173 10:59:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3885038 00:05:24.173 10:59:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.173 10:59:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:05:24.434 [ 00:05:24.434 "bdev_malloc_delete", 00:05:24.434 "bdev_malloc_create", 00:05:24.434 "bdev_null_resize", 00:05:24.434 "bdev_null_delete", 00:05:24.434 "bdev_null_create", 00:05:24.434 "bdev_nvme_cuse_unregister", 00:05:24.434 "bdev_nvme_cuse_register", 00:05:24.434 "bdev_opal_new_user", 00:05:24.434 "bdev_opal_set_lock_state", 00:05:24.434 "bdev_opal_delete", 00:05:24.434 "bdev_opal_get_info", 00:05:24.434 "bdev_opal_create", 00:05:24.434 "bdev_nvme_opal_revert", 00:05:24.434 "bdev_nvme_opal_init", 00:05:24.434 "bdev_nvme_send_cmd", 00:05:24.434 "bdev_nvme_set_keys", 00:05:24.434 "bdev_nvme_get_path_iostat", 00:05:24.434 "bdev_nvme_get_mdns_discovery_info", 00:05:24.434 "bdev_nvme_stop_mdns_discovery", 00:05:24.434 "bdev_nvme_start_mdns_discovery", 00:05:24.434 "bdev_nvme_set_multipath_policy", 00:05:24.434 "bdev_nvme_set_preferred_path", 00:05:24.434 "bdev_nvme_get_io_paths", 00:05:24.434 "bdev_nvme_remove_error_injection", 00:05:24.434 "bdev_nvme_add_error_injection", 00:05:24.434 "bdev_nvme_get_discovery_info", 00:05:24.434 "bdev_nvme_stop_discovery", 00:05:24.434 "bdev_nvme_start_discovery", 00:05:24.435 "bdev_nvme_get_controller_health_info", 00:05:24.435 "bdev_nvme_disable_controller", 00:05:24.435 "bdev_nvme_enable_controller", 00:05:24.435 "bdev_nvme_reset_controller", 00:05:24.435 "bdev_nvme_get_transport_statistics", 00:05:24.435 "bdev_nvme_apply_firmware", 00:05:24.435 "bdev_nvme_detach_controller", 00:05:24.435 "bdev_nvme_get_controllers", 00:05:24.435 "bdev_nvme_attach_controller", 00:05:24.435 "bdev_nvme_set_hotplug", 00:05:24.435 "bdev_nvme_set_options", 00:05:24.435 "bdev_passthru_delete", 00:05:24.435 "bdev_passthru_create", 00:05:24.435 "bdev_lvol_set_parent_bdev", 00:05:24.435 "bdev_lvol_set_parent", 00:05:24.435 "bdev_lvol_check_shallow_copy", 00:05:24.435 "bdev_lvol_start_shallow_copy", 00:05:24.435 "bdev_lvol_grow_lvstore", 00:05:24.435 "bdev_lvol_get_lvols", 00:05:24.435 "bdev_lvol_get_lvstores", 
00:05:24.435 "bdev_lvol_delete", 00:05:24.435 "bdev_lvol_set_read_only", 00:05:24.435 "bdev_lvol_resize", 00:05:24.435 "bdev_lvol_decouple_parent", 00:05:24.435 "bdev_lvol_inflate", 00:05:24.435 "bdev_lvol_rename", 00:05:24.435 "bdev_lvol_clone_bdev", 00:05:24.435 "bdev_lvol_clone", 00:05:24.435 "bdev_lvol_snapshot", 00:05:24.435 "bdev_lvol_create", 00:05:24.435 "bdev_lvol_delete_lvstore", 00:05:24.435 "bdev_lvol_rename_lvstore", 00:05:24.435 "bdev_lvol_create_lvstore", 00:05:24.435 "bdev_raid_set_options", 00:05:24.435 "bdev_raid_remove_base_bdev", 00:05:24.435 "bdev_raid_add_base_bdev", 00:05:24.435 "bdev_raid_delete", 00:05:24.435 "bdev_raid_create", 00:05:24.435 "bdev_raid_get_bdevs", 00:05:24.435 "bdev_error_inject_error", 00:05:24.435 "bdev_error_delete", 00:05:24.435 "bdev_error_create", 00:05:24.435 "bdev_split_delete", 00:05:24.435 "bdev_split_create", 00:05:24.435 "bdev_delay_delete", 00:05:24.435 "bdev_delay_create", 00:05:24.435 "bdev_delay_update_latency", 00:05:24.435 "bdev_zone_block_delete", 00:05:24.435 "bdev_zone_block_create", 00:05:24.435 "blobfs_create", 00:05:24.435 "blobfs_detect", 00:05:24.435 "blobfs_set_cache_size", 00:05:24.435 "bdev_aio_delete", 00:05:24.435 "bdev_aio_rescan", 00:05:24.435 "bdev_aio_create", 00:05:24.435 "bdev_ftl_set_property", 00:05:24.435 "bdev_ftl_get_properties", 00:05:24.435 "bdev_ftl_get_stats", 00:05:24.435 "bdev_ftl_unmap", 00:05:24.435 "bdev_ftl_unload", 00:05:24.435 "bdev_ftl_delete", 00:05:24.435 "bdev_ftl_load", 00:05:24.435 "bdev_ftl_create", 00:05:24.435 "bdev_virtio_attach_controller", 00:05:24.435 "bdev_virtio_scsi_get_devices", 00:05:24.435 "bdev_virtio_detach_controller", 00:05:24.435 "bdev_virtio_blk_set_hotplug", 00:05:24.435 "bdev_iscsi_delete", 00:05:24.435 "bdev_iscsi_create", 00:05:24.435 "bdev_iscsi_set_options", 00:05:24.435 "accel_error_inject_error", 00:05:24.435 "ioat_scan_accel_module", 00:05:24.435 "dsa_scan_accel_module", 00:05:24.435 "iaa_scan_accel_module", 00:05:24.435 
"vfu_virtio_create_fs_endpoint", 00:05:24.435 "vfu_virtio_create_scsi_endpoint", 00:05:24.435 "vfu_virtio_scsi_remove_target", 00:05:24.435 "vfu_virtio_scsi_add_target", 00:05:24.435 "vfu_virtio_create_blk_endpoint", 00:05:24.435 "vfu_virtio_delete_endpoint", 00:05:24.435 "keyring_file_remove_key", 00:05:24.435 "keyring_file_add_key", 00:05:24.435 "keyring_linux_set_options", 00:05:24.435 "fsdev_aio_delete", 00:05:24.435 "fsdev_aio_create", 00:05:24.435 "iscsi_get_histogram", 00:05:24.435 "iscsi_enable_histogram", 00:05:24.435 "iscsi_set_options", 00:05:24.435 "iscsi_get_auth_groups", 00:05:24.435 "iscsi_auth_group_remove_secret", 00:05:24.435 "iscsi_auth_group_add_secret", 00:05:24.435 "iscsi_delete_auth_group", 00:05:24.435 "iscsi_create_auth_group", 00:05:24.435 "iscsi_set_discovery_auth", 00:05:24.435 "iscsi_get_options", 00:05:24.435 "iscsi_target_node_request_logout", 00:05:24.435 "iscsi_target_node_set_redirect", 00:05:24.435 "iscsi_target_node_set_auth", 00:05:24.435 "iscsi_target_node_add_lun", 00:05:24.435 "iscsi_get_stats", 00:05:24.435 "iscsi_get_connections", 00:05:24.435 "iscsi_portal_group_set_auth", 00:05:24.435 "iscsi_start_portal_group", 00:05:24.435 "iscsi_delete_portal_group", 00:05:24.435 "iscsi_create_portal_group", 00:05:24.435 "iscsi_get_portal_groups", 00:05:24.435 "iscsi_delete_target_node", 00:05:24.435 "iscsi_target_node_remove_pg_ig_maps", 00:05:24.435 "iscsi_target_node_add_pg_ig_maps", 00:05:24.435 "iscsi_create_target_node", 00:05:24.435 "iscsi_get_target_nodes", 00:05:24.435 "iscsi_delete_initiator_group", 00:05:24.435 "iscsi_initiator_group_remove_initiators", 00:05:24.435 "iscsi_initiator_group_add_initiators", 00:05:24.435 "iscsi_create_initiator_group", 00:05:24.435 "iscsi_get_initiator_groups", 00:05:24.435 "nvmf_set_crdt", 00:05:24.435 "nvmf_set_config", 00:05:24.435 "nvmf_set_max_subsystems", 00:05:24.435 "nvmf_stop_mdns_prr", 00:05:24.435 "nvmf_publish_mdns_prr", 00:05:24.435 "nvmf_subsystem_get_listeners", 00:05:24.435 
"nvmf_subsystem_get_qpairs", 00:05:24.435 "nvmf_subsystem_get_controllers", 00:05:24.435 "nvmf_get_stats", 00:05:24.435 "nvmf_get_transports", 00:05:24.435 "nvmf_create_transport", 00:05:24.435 "nvmf_get_targets", 00:05:24.435 "nvmf_delete_target", 00:05:24.435 "nvmf_create_target", 00:05:24.435 "nvmf_subsystem_allow_any_host", 00:05:24.435 "nvmf_subsystem_set_keys", 00:05:24.435 "nvmf_subsystem_remove_host", 00:05:24.435 "nvmf_subsystem_add_host", 00:05:24.435 "nvmf_ns_remove_host", 00:05:24.435 "nvmf_ns_add_host", 00:05:24.435 "nvmf_subsystem_remove_ns", 00:05:24.435 "nvmf_subsystem_set_ns_ana_group", 00:05:24.435 "nvmf_subsystem_add_ns", 00:05:24.435 "nvmf_subsystem_listener_set_ana_state", 00:05:24.435 "nvmf_discovery_get_referrals", 00:05:24.435 "nvmf_discovery_remove_referral", 00:05:24.435 "nvmf_discovery_add_referral", 00:05:24.435 "nvmf_subsystem_remove_listener", 00:05:24.435 "nvmf_subsystem_add_listener", 00:05:24.435 "nvmf_delete_subsystem", 00:05:24.435 "nvmf_create_subsystem", 00:05:24.435 "nvmf_get_subsystems", 00:05:24.435 "env_dpdk_get_mem_stats", 00:05:24.435 "nbd_get_disks", 00:05:24.435 "nbd_stop_disk", 00:05:24.435 "nbd_start_disk", 00:05:24.435 "ublk_recover_disk", 00:05:24.435 "ublk_get_disks", 00:05:24.435 "ublk_stop_disk", 00:05:24.435 "ublk_start_disk", 00:05:24.435 "ublk_destroy_target", 00:05:24.435 "ublk_create_target", 00:05:24.435 "virtio_blk_create_transport", 00:05:24.435 "virtio_blk_get_transports", 00:05:24.435 "vhost_controller_set_coalescing", 00:05:24.435 "vhost_get_controllers", 00:05:24.435 "vhost_delete_controller", 00:05:24.435 "vhost_create_blk_controller", 00:05:24.435 "vhost_scsi_controller_remove_target", 00:05:24.435 "vhost_scsi_controller_add_target", 00:05:24.435 "vhost_start_scsi_controller", 00:05:24.435 "vhost_create_scsi_controller", 00:05:24.435 "thread_set_cpumask", 00:05:24.435 "scheduler_set_options", 00:05:24.435 "framework_get_governor", 00:05:24.435 "framework_get_scheduler", 00:05:24.435 
"framework_set_scheduler", 00:05:24.435 "framework_get_reactors", 00:05:24.435 "thread_get_io_channels", 00:05:24.435 "thread_get_pollers", 00:05:24.435 "thread_get_stats", 00:05:24.435 "framework_monitor_context_switch", 00:05:24.435 "spdk_kill_instance", 00:05:24.435 "log_enable_timestamps", 00:05:24.435 "log_get_flags", 00:05:24.435 "log_clear_flag", 00:05:24.435 "log_set_flag", 00:05:24.435 "log_get_level", 00:05:24.435 "log_set_level", 00:05:24.435 "log_get_print_level", 00:05:24.435 "log_set_print_level", 00:05:24.435 "framework_enable_cpumask_locks", 00:05:24.435 "framework_disable_cpumask_locks", 00:05:24.435 "framework_wait_init", 00:05:24.435 "framework_start_init", 00:05:24.435 "scsi_get_devices", 00:05:24.435 "bdev_get_histogram", 00:05:24.435 "bdev_enable_histogram", 00:05:24.435 "bdev_set_qos_limit", 00:05:24.435 "bdev_set_qd_sampling_period", 00:05:24.435 "bdev_get_bdevs", 00:05:24.435 "bdev_reset_iostat", 00:05:24.435 "bdev_get_iostat", 00:05:24.435 "bdev_examine", 00:05:24.435 "bdev_wait_for_examine", 00:05:24.435 "bdev_set_options", 00:05:24.435 "accel_get_stats", 00:05:24.435 "accel_set_options", 00:05:24.435 "accel_set_driver", 00:05:24.435 "accel_crypto_key_destroy", 00:05:24.435 "accel_crypto_keys_get", 00:05:24.435 "accel_crypto_key_create", 00:05:24.435 "accel_assign_opc", 00:05:24.435 "accel_get_module_info", 00:05:24.435 "accel_get_opc_assignments", 00:05:24.435 "vmd_rescan", 00:05:24.435 "vmd_remove_device", 00:05:24.435 "vmd_enable", 00:05:24.435 "sock_get_default_impl", 00:05:24.435 "sock_set_default_impl", 00:05:24.435 "sock_impl_set_options", 00:05:24.435 "sock_impl_get_options", 00:05:24.435 "iobuf_get_stats", 00:05:24.435 "iobuf_set_options", 00:05:24.435 "keyring_get_keys", 00:05:24.435 "vfu_tgt_set_base_path", 00:05:24.435 "framework_get_pci_devices", 00:05:24.435 "framework_get_config", 00:05:24.435 "framework_get_subsystems", 00:05:24.435 "fsdev_set_opts", 00:05:24.435 "fsdev_get_opts", 00:05:24.435 "trace_get_info", 
00:05:24.435 "trace_get_tpoint_group_mask", 00:05:24.435 "trace_disable_tpoint_group", 00:05:24.435 "trace_enable_tpoint_group", 00:05:24.435 "trace_clear_tpoint_mask", 00:05:24.435 "trace_set_tpoint_mask", 00:05:24.435 "notify_get_notifications", 00:05:24.435 "notify_get_types", 00:05:24.435 "spdk_get_version", 00:05:24.435 "rpc_get_methods" 00:05:24.435 ] 00:05:24.435 10:59:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.435 10:59:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:24.435 10:59:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3884813 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3884813 ']' 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3884813 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3884813 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3884813' 00:05:24.435 killing process with pid 3884813 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3884813 00:05:24.435 10:59:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3884813 00:05:24.697 00:05:24.697 real 0m1.505s 00:05:24.697 user 0m2.749s 00:05:24.697 sys 0m0.444s 00:05:24.697 10:59:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.697 10:59:32 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:24.697 ************************************ 00:05:24.697 END TEST spdkcli_tcp 00:05:24.697 ************************************ 00:05:24.697 10:59:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.697 10:59:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.697 10:59:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.697 10:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:24.697 ************************************ 00:05:24.697 START TEST dpdk_mem_utility 00:05:24.697 ************************************ 00:05:24.697 10:59:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.697 * Looking for test storage... 00:05:24.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:24.697 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.697 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.697 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.959 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:24.959 10:59:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.960 10:59:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:24.960 10:59:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:24.960 10:59:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.960 10:59:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:24.960 10:59:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.960 10:59:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.960 10:59:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.960 10:59:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:05:24.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.960 --rc genhtml_branch_coverage=1 00:05:24.960 --rc genhtml_function_coverage=1 00:05:24.960 --rc genhtml_legend=1 00:05:24.960 --rc geninfo_all_blocks=1 00:05:24.960 --rc geninfo_unexecuted_blocks=1 00:05:24.960 00:05:24.960 ' 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.960 --rc genhtml_branch_coverage=1 00:05:24.960 --rc genhtml_function_coverage=1 00:05:24.960 --rc genhtml_legend=1 00:05:24.960 --rc geninfo_all_blocks=1 00:05:24.960 --rc geninfo_unexecuted_blocks=1 00:05:24.960 00:05:24.960 ' 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.960 --rc genhtml_branch_coverage=1 00:05:24.960 --rc genhtml_function_coverage=1 00:05:24.960 --rc genhtml_legend=1 00:05:24.960 --rc geninfo_all_blocks=1 00:05:24.960 --rc geninfo_unexecuted_blocks=1 00:05:24.960 00:05:24.960 ' 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.960 --rc genhtml_branch_coverage=1 00:05:24.960 --rc genhtml_function_coverage=1 00:05:24.960 --rc genhtml_legend=1 00:05:24.960 --rc geninfo_all_blocks=1 00:05:24.960 --rc geninfo_unexecuted_blocks=1 00:05:24.960 00:05:24.960 ' 00:05:24.960 10:59:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:24.960 10:59:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3885228 00:05:24.960 10:59:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3885228 00:05:24.960 10:59:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3885228 ']' 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.960 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.960 [2024-11-19 10:59:33.199912] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:24.960 [2024-11-19 10:59:33.199985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885228 ] 00:05:24.960 [2024-11-19 10:59:33.282085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.221 [2024-11-19 10:59:33.325017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.792 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.792 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:25.792 10:59:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.792 10:59:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.792 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.792 
10:59:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.792 { 00:05:25.792 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.792 } 00:05:25.792 10:59:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.792 10:59:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:25.792 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:25.792 1 heaps totaling size 810.000000 MiB 00:05:25.792 size: 810.000000 MiB heap id: 0 00:05:25.792 end heaps---------- 00:05:25.792 9 mempools totaling size 595.772034 MiB 00:05:25.792 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.792 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.792 size: 92.545471 MiB name: bdev_io_3885228 00:05:25.792 size: 50.003479 MiB name: msgpool_3885228 00:05:25.792 size: 36.509338 MiB name: fsdev_io_3885228 00:05:25.792 size: 21.763794 MiB name: PDU_Pool 00:05:25.792 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.792 size: 4.133484 MiB name: evtpool_3885228 00:05:25.792 size: 0.026123 MiB name: Session_Pool 00:05:25.792 end mempools------- 00:05:25.792 6 memzones totaling size 4.142822 MiB 00:05:25.792 size: 1.000366 MiB name: RG_ring_0_3885228 00:05:25.792 size: 1.000366 MiB name: RG_ring_1_3885228 00:05:25.792 size: 1.000366 MiB name: RG_ring_4_3885228 00:05:25.792 size: 1.000366 MiB name: RG_ring_5_3885228 00:05:25.792 size: 0.125366 MiB name: RG_ring_2_3885228 00:05:25.792 size: 0.015991 MiB name: RG_ring_3_3885228 00:05:25.792 end memzones------- 00:05:25.792 10:59:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.792 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:25.792 list of free elements. 
size: 10.862488 MiB 00:05:25.792 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:25.792 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:25.792 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:25.792 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:25.792 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:25.792 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:25.792 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:25.792 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:25.792 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:25.792 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:25.792 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:25.792 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:25.792 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:25.792 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:25.792 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:25.792 list of standard malloc elements. 
size: 199.218628 MiB 00:05:25.792 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:25.792 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:25.792 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:25.792 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:25.792 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:25.792 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:25.792 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:25.792 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.792 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:25.792 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:25.792 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:25.792 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:25.792 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:25.792 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:25.792 list of memzone associated elements. 
size: 599.918884 MiB 00:05:25.792 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:25.792 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.792 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:25.792 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.792 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:25.792 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3885228_0 00:05:25.793 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:25.793 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3885228_0 00:05:25.793 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:25.793 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3885228_0 00:05:25.793 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:25.793 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.793 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:25.793 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.793 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:25.793 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3885228_0 00:05:25.793 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:25.793 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3885228 00:05:25.793 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:25.793 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3885228 00:05:25.793 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:25.793 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.793 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:25.793 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.793 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:25.793 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.793 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:25.793 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.793 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:25.793 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3885228 00:05:25.793 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:25.793 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3885228 00:05:25.793 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:25.793 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3885228 00:05:25.793 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:25.793 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3885228 00:05:25.793 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:25.793 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3885228 00:05:25.793 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:25.793 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3885228 00:05:25.793 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:25.793 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.793 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:25.793 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.793 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:25.793 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.793 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:25.793 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3885228 00:05:25.793 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:25.793 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3885228 00:05:25.793 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:05:25.793 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.793 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:25.793 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.793 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:25.793 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3885228 00:05:25.793 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:25.793 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.793 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:25.793 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3885228 00:05:25.793 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:25.793 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3885228 00:05:25.793 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:25.793 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3885228 00:05:25.793 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:25.793 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.793 10:59:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.793 10:59:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3885228 00:05:25.793 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3885228 ']' 00:05:25.793 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3885228 00:05:25.793 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:25.793 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.793 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3885228 00:05:26.054 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.054 10:59:34 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.054 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885228' 00:05:26.054 killing process with pid 3885228 00:05:26.054 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3885228 00:05:26.054 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3885228 00:05:26.054 00:05:26.054 real 0m1.422s 00:05:26.054 user 0m1.501s 00:05:26.054 sys 0m0.416s 00:05:26.054 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.054 10:59:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.054 ************************************ 00:05:26.054 END TEST dpdk_mem_utility 00:05:26.054 ************************************ 00:05:26.055 10:59:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:26.055 10:59:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.055 10:59:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.055 10:59:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.316 ************************************ 00:05:26.316 START TEST event 00:05:26.316 ************************************ 00:05:26.316 10:59:34 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:26.316 * Looking for test storage... 
00:05:26.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.317 10:59:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.317 10:59:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.317 10:59:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.317 10:59:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.317 10:59:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.317 10:59:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.317 10:59:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.317 10:59:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.317 10:59:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.317 10:59:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.317 10:59:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.317 10:59:34 event -- scripts/common.sh@344 -- # case "$op" in 00:05:26.317 10:59:34 event -- scripts/common.sh@345 -- # : 1 00:05:26.317 10:59:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.317 10:59:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.317 10:59:34 event -- scripts/common.sh@365 -- # decimal 1 00:05:26.317 10:59:34 event -- scripts/common.sh@353 -- # local d=1 00:05:26.317 10:59:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.317 10:59:34 event -- scripts/common.sh@355 -- # echo 1 00:05:26.317 10:59:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.317 10:59:34 event -- scripts/common.sh@366 -- # decimal 2 00:05:26.317 10:59:34 event -- scripts/common.sh@353 -- # local d=2 00:05:26.317 10:59:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.317 10:59:34 event -- scripts/common.sh@355 -- # echo 2 00:05:26.317 10:59:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.317 10:59:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.317 10:59:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.317 10:59:34 event -- scripts/common.sh@368 -- # return 0 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.317 --rc genhtml_branch_coverage=1 00:05:26.317 --rc genhtml_function_coverage=1 00:05:26.317 --rc genhtml_legend=1 00:05:26.317 --rc geninfo_all_blocks=1 00:05:26.317 --rc geninfo_unexecuted_blocks=1 00:05:26.317 00:05:26.317 ' 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.317 --rc genhtml_branch_coverage=1 00:05:26.317 --rc genhtml_function_coverage=1 00:05:26.317 --rc genhtml_legend=1 00:05:26.317 --rc geninfo_all_blocks=1 00:05:26.317 --rc geninfo_unexecuted_blocks=1 00:05:26.317 00:05:26.317 ' 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.317 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:26.317 --rc genhtml_branch_coverage=1 00:05:26.317 --rc genhtml_function_coverage=1 00:05:26.317 --rc genhtml_legend=1 00:05:26.317 --rc geninfo_all_blocks=1 00:05:26.317 --rc geninfo_unexecuted_blocks=1 00:05:26.317 00:05:26.317 ' 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.317 --rc genhtml_branch_coverage=1 00:05:26.317 --rc genhtml_function_coverage=1 00:05:26.317 --rc genhtml_legend=1 00:05:26.317 --rc geninfo_all_blocks=1 00:05:26.317 --rc geninfo_unexecuted_blocks=1 00:05:26.317 00:05:26.317 ' 00:05:26.317 10:59:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:26.317 10:59:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.317 10:59:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:26.317 10:59:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.317 10:59:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.578 ************************************ 00:05:26.578 START TEST event_perf 00:05:26.578 ************************************ 00:05:26.578 10:59:34 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.578 Running I/O for 1 seconds...[2024-11-19 10:59:34.697185] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:26.578 [2024-11-19 10:59:34.697289] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885626 ] 00:05:26.578 [2024-11-19 10:59:34.780462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.579 [2024-11-19 10:59:34.820170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.579 [2024-11-19 10:59:34.820270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.579 [2024-11-19 10:59:34.820425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.579 Running I/O for 1 seconds...[2024-11-19 10:59:34.820426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.521 00:05:27.521 lcore 0: 178413 00:05:27.521 lcore 1: 178412 00:05:27.521 lcore 2: 178409 00:05:27.521 lcore 3: 178412 00:05:27.521 done. 
00:05:27.521 00:05:27.521 real 0m1.179s 00:05:27.521 user 0m4.095s 00:05:27.521 sys 0m0.080s 00:05:27.521 10:59:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.521 10:59:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.521 ************************************ 00:05:27.521 END TEST event_perf 00:05:27.521 ************************************ 00:05:27.781 10:59:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.781 10:59:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:27.781 10:59:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.781 10:59:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.781 ************************************ 00:05:27.781 START TEST event_reactor 00:05:27.781 ************************************ 00:05:27.781 10:59:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.781 [2024-11-19 10:59:35.955783] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:27.781 [2024-11-19 10:59:35.955882] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885986 ] 00:05:27.781 [2024-11-19 10:59:36.038148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.781 [2024-11-19 10:59:36.073587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.167 test_start 00:05:29.167 oneshot 00:05:29.167 tick 100 00:05:29.167 tick 100 00:05:29.167 tick 250 00:05:29.167 tick 100 00:05:29.167 tick 100 00:05:29.167 tick 250 00:05:29.167 tick 100 00:05:29.167 tick 500 00:05:29.167 tick 100 00:05:29.167 tick 100 00:05:29.167 tick 250 00:05:29.167 tick 100 00:05:29.167 tick 100 00:05:29.167 test_end 00:05:29.167 00:05:29.167 real 0m1.171s 00:05:29.167 user 0m1.106s 00:05:29.167 sys 0m0.062s 00:05:29.167 10:59:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.167 10:59:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.167 ************************************ 00:05:29.167 END TEST event_reactor 00:05:29.167 ************************************ 00:05:29.167 10:59:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.167 10:59:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:29.167 10:59:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.167 10:59:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.167 ************************************ 00:05:29.167 START TEST event_reactor_perf 00:05:29.167 ************************************ 00:05:29.167 10:59:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:29.167 [2024-11-19 10:59:37.206066] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:29.167 [2024-11-19 10:59:37.206157] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886130 ] 00:05:29.167 [2024-11-19 10:59:37.287608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.167 [2024-11-19 10:59:37.322697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.110 test_start 00:05:30.110 test_end 00:05:30.110 Performance: 369327 events per second 00:05:30.110 00:05:30.110 real 0m1.171s 00:05:30.110 user 0m1.094s 00:05:30.110 sys 0m0.074s 00:05:30.110 10:59:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.110 10:59:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.110 ************************************ 00:05:30.110 END TEST event_reactor_perf 00:05:30.110 ************************************ 00:05:30.110 10:59:38 event -- event/event.sh@49 -- # uname -s 00:05:30.110 10:59:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:30.110 10:59:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.110 10:59:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.110 10:59:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.110 10:59:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.110 ************************************ 00:05:30.110 START TEST event_scheduler 00:05:30.110 ************************************ 00:05:30.110 10:59:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.371 * Looking for test storage... 00:05:30.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:30.371 10:59:38 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.371 10:59:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.371 10:59:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.371 10:59:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.371 10:59:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:30.371 10:59:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.371 10:59:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.371 --rc genhtml_branch_coverage=1 00:05:30.371 --rc genhtml_function_coverage=1 00:05:30.371 --rc genhtml_legend=1 00:05:30.371 --rc geninfo_all_blocks=1 00:05:30.371 --rc geninfo_unexecuted_blocks=1 00:05:30.371 00:05:30.371 ' 00:05:30.371 10:59:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.371 --rc genhtml_branch_coverage=1 00:05:30.371 --rc genhtml_function_coverage=1 00:05:30.371 --rc 
genhtml_legend=1 00:05:30.371 --rc geninfo_all_blocks=1 00:05:30.371 --rc geninfo_unexecuted_blocks=1 00:05:30.371 00:05:30.371 ' 00:05:30.371 10:59:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.371 --rc genhtml_branch_coverage=1 00:05:30.372 --rc genhtml_function_coverage=1 00:05:30.372 --rc genhtml_legend=1 00:05:30.372 --rc geninfo_all_blocks=1 00:05:30.372 --rc geninfo_unexecuted_blocks=1 00:05:30.372 00:05:30.372 ' 00:05:30.372 10:59:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.372 --rc genhtml_branch_coverage=1 00:05:30.372 --rc genhtml_function_coverage=1 00:05:30.372 --rc genhtml_legend=1 00:05:30.372 --rc geninfo_all_blocks=1 00:05:30.372 --rc geninfo_unexecuted_blocks=1 00:05:30.372 00:05:30.372 ' 00:05:30.372 10:59:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.372 10:59:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3886426 00:05:30.372 10:59:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.372 10:59:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3886426 00:05:30.372 10:59:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.372 10:59:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3886426 ']' 00:05:30.372 10:59:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.372 10:59:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.372 10:59:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.372 10:59:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.372 10:59:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.372 [2024-11-19 10:59:38.692024] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:30.372 [2024-11-19 10:59:38.692099] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886426 ] 00:05:30.634 [2024-11-19 10:59:38.766364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.634 [2024-11-19 10:59:38.804276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.634 [2024-11-19 10:59:38.804434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.634 [2024-11-19 10:59:38.804585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.634 [2024-11-19 10:59:38.804587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:30.634 10:59:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.634 [2024-11-19 10:59:38.849031] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:30.634 [2024-11-19 10:59:38.849045] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.634 [2024-11-19 10:59:38.849053] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.634 [2024-11-19 10:59:38.849057] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.634 [2024-11-19 10:59:38.849061] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.634 10:59:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.634 [2024-11-19 10:59:38.905459] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.634 10:59:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.634 10:59:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.634 ************************************ 00:05:30.634 START TEST scheduler_create_thread 00:05:30.634 ************************************ 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.634 2 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.634 3 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.634 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.896 4 00:05:30.896 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.896 10:59:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.896 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.896 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.896 5 00:05:30.896 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.896 10:59:38 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.896 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.896 10:59:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.896 6 00:05:30.896 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.896 10:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.896 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.896 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.896 7 00:05:30.896 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.896 10:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.896 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.896 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.896 8 00:05:30.897 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.897 10:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.897 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.897 10:59:39 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.897 9 00:05:30.897 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.897 10:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.897 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.897 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.157 10 00:05:31.157 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.157 10:59:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:31.157 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.157 10:59:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.545 10:59:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.545 10:59:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:32.545 10:59:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:32.545 10:59:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.545 10:59:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.490 10:59:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.490 10:59:41 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:33.491 10:59:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.491 10:59:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.063 10:59:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.063 10:59:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:34.063 10:59:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:34.063 10:59:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.063 10:59:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.007 10:59:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.007 00:05:35.007 real 0m4.226s 00:05:35.007 user 0m0.023s 00:05:35.007 sys 0m0.008s 00:05:35.007 10:59:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.007 10:59:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.007 ************************************ 00:05:35.007 END TEST scheduler_create_thread 00:05:35.007 ************************************ 00:05:35.007 10:59:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:35.007 10:59:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3886426 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3886426 ']' 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3886426 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3886426 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3886426' 00:05:35.007 killing process with pid 3886426 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3886426 00:05:35.007 10:59:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3886426 00:05:35.268 [2024-11-19 10:59:43.450705] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:35.268 00:05:35.268 real 0m5.169s 00:05:35.268 user 0m10.234s 00:05:35.268 sys 0m0.373s 00:05:35.268 10:59:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.268 10:59:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.268 ************************************ 00:05:35.268 END TEST event_scheduler 00:05:35.268 ************************************ 00:05:35.530 10:59:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:35.531 10:59:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:35.531 10:59:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.531 10:59:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.531 10:59:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.531 ************************************ 00:05:35.531 START TEST app_repeat 00:05:35.531 ************************************ 00:05:35.531 10:59:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3887482 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3887482' 00:05:35.531 Process app_repeat pid: 3887482 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:35.531 spdk_app_start Round 0 00:05:35.531 10:59:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3887482 /var/tmp/spdk-nbd.sock 00:05:35.531 10:59:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3887482 ']' 00:05:35.531 10:59:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.531 10:59:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.531 10:59:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.531 10:59:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.531 10:59:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.531 [2024-11-19 10:59:43.729617] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:35.531 [2024-11-19 10:59:43.729687] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887482 ] 00:05:35.531 [2024-11-19 10:59:43.811714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.531 [2024-11-19 10:59:43.849609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.531 [2024-11-19 10:59:43.849611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.792 10:59:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.792 10:59:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.792 10:59:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.792 Malloc0 00:05:35.792 10:59:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.057 Malloc1 00:05:36.057 10:59:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.057 
10:59:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.057 10:59:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.318 /dev/nbd0 00:05:36.318 10:59:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.318 10:59:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:36.318 1+0 records in 00:05:36.318 1+0 records out 00:05:36.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315508 s, 13.0 MB/s 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.318 10:59:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.318 10:59:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.318 10:59:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.318 10:59:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.581 /dev/nbd1 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.581 10:59:44 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.581 1+0 records in 00:05:36.581 1+0 records out 00:05:36.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022008 s, 18.6 MB/s 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.581 10:59:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.581 { 00:05:36.581 "nbd_device": "/dev/nbd0", 00:05:36.581 "bdev_name": "Malloc0" 00:05:36.581 }, 00:05:36.581 { 00:05:36.581 "nbd_device": "/dev/nbd1", 00:05:36.581 "bdev_name": "Malloc1" 00:05:36.581 } 00:05:36.581 ]' 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.581 { 00:05:36.581 "nbd_device": "/dev/nbd0", 00:05:36.581 "bdev_name": "Malloc0" 00:05:36.581 
}, 00:05:36.581 { 00:05:36.581 "nbd_device": "/dev/nbd1", 00:05:36.581 "bdev_name": "Malloc1" 00:05:36.581 } 00:05:36.581 ]' 00:05:36.581 10:59:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.843 /dev/nbd1' 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.843 /dev/nbd1' 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.843 256+0 records in 00:05:36.843 256+0 records out 00:05:36.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121536 s, 86.3 MB/s 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.843 256+0 records in 00:05:36.843 256+0 records out 00:05:36.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164222 s, 63.9 MB/s 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.843 10:59:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.843 256+0 records in 00:05:36.843 256+0 records out 00:05:36.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188576 s, 55.6 MB/s 00:05:36.844 10:59:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.844 10:59:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.844 10:59:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.844 10:59:45 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.844 10:59:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.104 10:59:45 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.104 10:59:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.366 10:59:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.366 10:59:45 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.626 10:59:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.626 [2024-11-19 10:59:45.915436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.626 [2024-11-19 10:59:45.951849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.626 [2024-11-19 10:59:45.951851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.887 [2024-11-19 10:59:45.983710] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.887 [2024-11-19 10:59:45.983748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.194 10:59:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.194 10:59:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:41.194 spdk_app_start Round 1 00:05:41.194 10:59:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3887482 /var/tmp/spdk-nbd.sock 00:05:41.194 10:59:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3887482 ']' 00:05:41.194 10:59:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.194 10:59:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.194 10:59:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:41.194 10:59:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:41.194 10:59:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:41.194 10:59:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:41.194 10:59:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:41.194 10:59:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:41.194 Malloc0
00:05:41.194 10:59:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:41.194 Malloc1
00:05:41.194 10:59:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:41.195 /dev/nbd0
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:41.195 10:59:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:41.195 1+0 records in
00:05:41.195 1+0 records out
00:05:41.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278919 s, 14.7 MB/s
00:05:41.195 10:59:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:41.456 /dev/nbd1
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:41.456 1+0 records in
00:05:41.456 1+0 records out
00:05:41.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249444 s, 16.4 MB/s
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:41.456 10:59:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.456 10:59:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:41.718 {
00:05:41.718 "nbd_device": "/dev/nbd0",
00:05:41.718 "bdev_name": "Malloc0"
00:05:41.718 },
00:05:41.718 {
00:05:41.718 "nbd_device": "/dev/nbd1",
00:05:41.718 "bdev_name": "Malloc1"
00:05:41.718 }
00:05:41.718 ]'
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:41.718 {
00:05:41.718 "nbd_device": "/dev/nbd0",
00:05:41.718 "bdev_name": "Malloc0"
00:05:41.718 },
00:05:41.718 {
00:05:41.718 "nbd_device": "/dev/nbd1",
00:05:41.718 "bdev_name": "Malloc1"
00:05:41.718 }
00:05:41.718 ]'
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:41.718 /dev/nbd1'
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:41.718 /dev/nbd1'
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:41.718 10:59:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:41.719 256+0 records in
00:05:41.719 256+0 records out
00:05:41.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126088 s, 83.2 MB/s
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:41.719 256+0 records in
00:05:41.719 256+0 records out
00:05:41.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179932 s, 58.3 MB/s
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:41.719 256+0 records in
00:05:41.719 256+0 records out
00:05:41.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196445 s, 53.4 MB/s
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:41.719 10:59:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:41.980 10:59:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:41.980 10:59:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:41.980 10:59:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:41.980 10:59:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:41.981 10:59:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:42.242 10:59:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:42.504 10:59:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:42.504 10:59:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:42.766 10:59:50 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:42.766 [2024-11-19 10:59:50.981977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:42.766 [2024-11-19 10:59:51.018562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:42.766 [2024-11-19 10:59:51.018563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.766 [2024-11-19 10:59:51.051051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:42.766 [2024-11-19 10:59:51.051089] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:46.068 10:59:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:46.068 10:59:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:46.068 spdk_app_start Round 2
00:05:46.068 10:59:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3887482 /var/tmp/spdk-nbd.sock
00:05:46.068 10:59:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3887482 ']'
00:05:46.068 10:59:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:46.068 10:59:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:46.068 10:59:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:46.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
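The waitforlisten trace above polls until the app is up and listening on its UNIX domain socket (max_retries=100 in the log). A minimal standalone sketch of that polling pattern; `wait_for_socket` is a hypothetical helper name, not part of the SPDK tree:

```shell
# Hypothetical helper mirroring waitforlisten's retry loop: poll until the
# UNIX-domain socket path exists, up to max_retries attempts, then give up.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 1; i <= max_retries; i++)); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage would be `wait_for_socket /var/tmp/spdk-nbd.sock || exit 1`; the real helper also verifies the listening process is alive on each iteration.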
00:05:46.068 10:59:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:46.068 10:59:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:46.068 10:59:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:46.068 10:59:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:46.068 10:59:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:46.068 Malloc0
00:05:46.068 10:59:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:46.068 Malloc1
00:05:46.068 10:59:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:46.068 10:59:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:46.068 10:59:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:46.068 10:59:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:46.068 10:59:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.068 10:59:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:46.068 10:59:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:46.069 10:59:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:46.330 /dev/nbd0
00:05:46.330 10:59:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:46.330 10:59:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:46.330 1+0 records in
00:05:46.330 1+0 records out
00:05:46.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027359 s, 15.0 MB/s
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:46.330 10:59:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:46.330 10:59:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:46.330 10:59:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:46.330 10:59:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:46.591 /dev/nbd1
00:05:46.591 10:59:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:46.591 10:59:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:46.591 1+0 records in
00:05:46.591 1+0 records out
00:05:46.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000138275 s, 29.6 MB/s
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:46.591 10:59:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:46.591 10:59:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:46.591 10:59:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:46.591 10:59:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:46.591 10:59:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:46.591 10:59:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:46.852 {
00:05:46.852 "nbd_device": "/dev/nbd0",
00:05:46.852 "bdev_name": "Malloc0"
00:05:46.852 },
00:05:46.852 {
00:05:46.852 "nbd_device": "/dev/nbd1",
00:05:46.852 "bdev_name": "Malloc1"
00:05:46.852 }
00:05:46.852 ]'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:46.852 {
00:05:46.852 "nbd_device": "/dev/nbd0",
00:05:46.852 "bdev_name": "Malloc0"
00:05:46.852 },
00:05:46.852 {
00:05:46.852 "nbd_device": "/dev/nbd1",
00:05:46.852 "bdev_name": "Malloc1"
00:05:46.852 }
00:05:46.852 ]'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:46.852 /dev/nbd1'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:46.852 /dev/nbd1'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:46.852 256+0 records in
00:05:46.852 256+0 records out
00:05:46.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127418 s, 82.3 MB/s
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:46.852 256+0 records in
00:05:46.852 256+0 records out
00:05:46.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169649 s, 61.8 MB/s
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:46.852 256+0 records in
00:05:46.852 256+0 records out
00:05:46.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185985 s, 56.4 MB/s
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:46.852 10:59:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:47.113 10:59:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:47.374 10:59:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:47.374 10:59:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:47.374 10:59:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:47.374 10:59:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:47.375 10:59:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:47.635 10:59:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:47.635 10:59:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:47.635 10:59:55 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:47.896 [2024-11-19 10:59:56.049363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:47.896 [2024-11-19 10:59:56.085874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.896 [2024-11-19 10:59:56.085883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:47.896 [2024-11-19 10:59:56.117924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:47.896 [2024-11-19 10:59:56.117967] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:51.201 10:59:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3887482 /var/tmp/spdk-nbd.sock
00:05:51.201 10:59:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3887482 ']'
00:05:51.201 10:59:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:51.201 10:59:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:51.201 10:59:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:51.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
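The nbd_get_count steps traced above count /dev/nbd entries in the disk list with `grep -c`, followed by a bare `true` so that an empty list yields count=0 without tripping the suite's error handling. A standalone sketch of that counting idiom; `count_nbd` is a hypothetical name, not an SPDK helper:

```shell
# Count /dev/nbd paths on stdin. grep -c prints 0 and exits nonzero when
# nothing matches, so `|| true` keeps the count usable under `set -e`,
# matching the `grep -c /dev/nbd` + `true` pair in the trace.
count_nbd() {
    grep -c /dev/nbd || true
}
```

With two devices listed the count is 2; after nbd_stop_disks the RPC returns `[]` and the count drops to 0, which is what the `'[' 0 -ne 0 ']'` check above verifies.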
00:05:51.201 10:59:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.201 10:59:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:51.201 10:59:59 event.app_repeat -- event/event.sh@39 -- # killprocess 3887482 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3887482 ']' 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3887482 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3887482 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3887482' 00:05:51.201 killing process with pid 3887482 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3887482 00:05:51.201 10:59:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3887482 00:05:51.201 spdk_app_start is called in Round 0. 00:05:51.201 Shutdown signal received, stop current app iteration 00:05:51.201 Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 reinitialization... 00:05:51.201 spdk_app_start is called in Round 1. 00:05:51.201 Shutdown signal received, stop current app iteration 00:05:51.201 Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 reinitialization... 00:05:51.201 spdk_app_start is called in Round 2. 
00:05:51.201 Shutdown signal received, stop current app iteration 00:05:51.201 Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 reinitialization... 00:05:51.201 spdk_app_start is called in Round 3. 00:05:51.201 Shutdown signal received, stop current app iteration 00:05:51.201 10:59:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.202 10:59:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.202 00:05:51.202 real 0m15.577s 00:05:51.202 user 0m33.833s 00:05:51.202 sys 0m2.362s 00:05:51.202 10:59:59 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.202 10:59:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.202 ************************************ 00:05:51.202 END TEST app_repeat 00:05:51.202 ************************************ 00:05:51.202 10:59:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.202 10:59:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.202 10:59:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.202 10:59:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.202 10:59:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.202 ************************************ 00:05:51.202 START TEST cpu_locks 00:05:51.202 ************************************ 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.202 * Looking for test storage... 
00:05:51.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.202 10:59:59 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.202 --rc genhtml_branch_coverage=1 00:05:51.202 --rc genhtml_function_coverage=1 00:05:51.202 --rc genhtml_legend=1 00:05:51.202 --rc geninfo_all_blocks=1 00:05:51.202 --rc geninfo_unexecuted_blocks=1 00:05:51.202 00:05:51.202 ' 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.202 --rc genhtml_branch_coverage=1 00:05:51.202 --rc genhtml_function_coverage=1 00:05:51.202 --rc genhtml_legend=1 00:05:51.202 --rc geninfo_all_blocks=1 00:05:51.202 --rc geninfo_unexecuted_blocks=1 
00:05:51.202 00:05:51.202 ' 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.202 --rc genhtml_branch_coverage=1 00:05:51.202 --rc genhtml_function_coverage=1 00:05:51.202 --rc genhtml_legend=1 00:05:51.202 --rc geninfo_all_blocks=1 00:05:51.202 --rc geninfo_unexecuted_blocks=1 00:05:51.202 00:05:51.202 ' 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.202 --rc genhtml_branch_coverage=1 00:05:51.202 --rc genhtml_function_coverage=1 00:05:51.202 --rc genhtml_legend=1 00:05:51.202 --rc geninfo_all_blocks=1 00:05:51.202 --rc geninfo_unexecuted_blocks=1 00:05:51.202 00:05:51.202 ' 00:05:51.202 10:59:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.202 10:59:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.202 10:59:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.202 10:59:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.202 10:59:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.464 ************************************ 00:05:51.464 START TEST default_locks 00:05:51.464 ************************************ 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3891059 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3891059 00:05:51.464 10:59:59 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3891059 ']' 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.464 10:59:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.464 [2024-11-19 10:59:59.648091] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:51.464 [2024-11-19 10:59:59.648146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891059 ] 00:05:51.464 [2024-11-19 10:59:59.727103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.464 [2024-11-19 10:59:59.763300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.406 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.406 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:52.406 11:00:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3891059 00:05:52.406 11:00:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3891059 00:05:52.406 11:00:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.667 lslocks: write error 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3891059 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3891059 ']' 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3891059 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891059 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3891059' 00:05:52.667 killing process with pid 3891059 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3891059 00:05:52.667 11:00:00 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3891059 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3891059 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3891059 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3891059 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3891059 ']' 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3891059) - No such process 00:05:52.929 ERROR: process (pid: 3891059) is no longer running 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.929 00:05:52.929 real 0m1.571s 00:05:52.929 user 0m1.696s 00:05:52.929 sys 0m0.532s 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.929 11:00:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.929 ************************************ 00:05:52.929 END TEST default_locks 00:05:52.929 ************************************ 00:05:52.929 11:00:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:52.929 11:00:01 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.929 11:00:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.929 11:00:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.929 ************************************ 00:05:52.929 START TEST default_locks_via_rpc 00:05:52.929 ************************************ 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3891482 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3891482 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3891482 ']' 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.929 11:00:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.189 [2024-11-19 11:00:01.296984] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:53.189 [2024-11-19 11:00:01.297039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891482 ] 00:05:53.189 [2024-11-19 11:00:01.376294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.189 [2024-11-19 11:00:01.414102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.761 11:00:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3891482 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3891482 00:05:53.761 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3891482 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3891482 ']' 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3891482 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891482 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3891482' 00:05:54.331 killing process with pid 3891482 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3891482 00:05:54.331 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3891482 00:05:54.591 00:05:54.591 real 0m1.597s 00:05:54.591 user 0m1.704s 00:05:54.591 sys 0m0.564s 00:05:54.591 11:00:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.591 11:00:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.591 ************************************ 00:05:54.591 END TEST default_locks_via_rpc 00:05:54.591 ************************************ 00:05:54.591 11:00:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.591 11:00:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.591 11:00:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.591 11:00:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.591 ************************************ 00:05:54.591 START TEST non_locking_app_on_locked_coremask 00:05:54.591 ************************************ 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3891897 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3891897 /var/tmp/spdk.sock 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3891897 ']' 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:54.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.591 11:00:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.852 [2024-11-19 11:00:02.967642] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:54.852 [2024-11-19 11:00:02.967691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891897 ] 00:05:54.852 [2024-11-19 11:00:03.045120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.852 [2024-11-19 11:00:03.080775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3891910 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3891910 /var/tmp/spdk2.sock 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3891910 ']' 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.424 11:00:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.684 [2024-11-19 11:00:03.813462] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:55.684 [2024-11-19 11:00:03.813515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891910 ] 00:05:55.684 [2024-11-19 11:00:03.939032] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.684 [2024-11-19 11:00:03.939065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.684 [2024-11-19 11:00:04.011795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.256 11:00:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.256 11:00:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.256 11:00:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3891897 00:05:56.256 11:00:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3891897 00:05:56.256 11:00:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.828 lslocks: write error 00:05:56.829 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3891897 00:05:56.829 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3891897 ']' 00:05:56.829 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3891897 00:05:56.829 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.829 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.829 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891897 00:05:57.090 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.090 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.090 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3891897' 00:05:57.090 killing process with pid 3891897 00:05:57.090 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3891897 00:05:57.090 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3891897 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3891910 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3891910 ']' 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3891910 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891910 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3891910' 00:05:57.351 killing process with pid 3891910 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3891910 00:05:57.351 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3891910 00:05:57.611 00:05:57.611 real 0m2.979s 00:05:57.611 user 0m3.315s 00:05:57.611 sys 0m0.906s 00:05:57.611 11:00:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.611 11:00:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.611 ************************************ 00:05:57.611 END TEST non_locking_app_on_locked_coremask 00:05:57.611 ************************************ 00:05:57.611 11:00:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:57.611 11:00:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.611 11:00:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.611 11:00:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.872 ************************************ 00:05:57.872 START TEST locking_app_on_unlocked_coremask 00:05:57.872 ************************************ 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3892443 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3892443 /var/tmp/spdk.sock 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3892443 ']' 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.872 11:00:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.872 11:00:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.872 [2024-11-19 11:00:06.023956] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:05:57.872 [2024-11-19 11:00:06.024005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892443 ] 00:05:57.873 [2024-11-19 11:00:06.102186] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.873 [2024-11-19 11:00:06.102215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.873 [2024-11-19 11:00:06.137968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3892622 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3892622 /var/tmp/spdk2.sock 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3892622 ']' 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.816 11:00:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.816 [2024-11-19 11:00:06.875444] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:05:58.816 [2024-11-19 11:00:06.875497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892622 ] 00:05:58.816 [2024-11-19 11:00:06.999810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.816 [2024-11-19 11:00:07.072370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.388 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.388 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.388 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3892622 00:05:59.388 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3892622 00:05:59.388 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.648 lslocks: write error 00:05:59.648 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3892443 00:05:59.648 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3892443 ']' 00:05:59.648 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3892443 00:05:59.648 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.648 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.648 11:00:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3892443 00:05:59.908 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.908 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.908 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3892443' 00:05:59.908 killing process with pid 3892443 00:05:59.908 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3892443 00:05:59.908 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3892443 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3892622 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3892622 ']' 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3892622 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3892622 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3892622' 00:06:00.169 killing process with pid 3892622 00:06:00.169 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3892622 00:06:00.169 11:00:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3892622 00:06:00.430 00:06:00.430 real 0m2.736s 00:06:00.430 user 0m3.059s 00:06:00.430 sys 0m0.796s 00:06:00.430 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.430 11:00:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.430 ************************************ 00:06:00.430 END TEST locking_app_on_unlocked_coremask 00:06:00.430 ************************************ 00:06:00.430 11:00:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:00.430 11:00:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.430 11:00:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.430 11:00:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.430 ************************************ 00:06:00.430 START TEST locking_app_on_locked_coremask 00:06:00.430 ************************************ 00:06:00.430 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:00.430 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3893086 00:06:00.430 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3893086 /var/tmp/spdk.sock 00:06:00.430 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.735 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3893086 ']' 00:06:00.735 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:00.735 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.735 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.735 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.735 11:00:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.735 [2024-11-19 11:00:08.840436] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:06:00.735 [2024-11-19 11:00:08.840490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893086 ] 00:06:00.735 [2024-11-19 11:00:08.922347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.735 [2024-11-19 11:00:08.961250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3893589 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3893589 /var/tmp/spdk2.sock 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3893589 /var/tmp/spdk2.sock 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3893589 /var/tmp/spdk2.sock 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3893589 ']' 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.459 11:00:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.459 [2024-11-19 11:00:09.704128] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:06:01.459 [2024-11-19 11:00:09.704182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893589 ] 00:06:01.738 [2024-11-19 11:00:09.829994] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3893086 has claimed it. 00:06:01.738 [2024-11-19 11:00:09.830041] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3893589) - No such process 00:06:02.000 ERROR: process (pid: 3893589) is no longer running 00:06:02.000 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.000 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:02.000 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:02.000 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.000 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.000 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.000 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3893086 00:06:02.000 11:00:10 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3893086 00:06:02.000 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.571 lslocks: write error 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3893086 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3893086 ']' 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3893086 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3893086 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3893086' 00:06:02.571 killing process with pid 3893086 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3893086 00:06:02.571 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3893086 00:06:02.833 00:06:02.833 real 0m2.197s 00:06:02.833 user 0m2.493s 00:06:02.833 sys 0m0.613s 00:06:02.833 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.833 11:00:10 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:02.833 ************************************ 00:06:02.833 END TEST locking_app_on_locked_coremask 00:06:02.833 ************************************ 00:06:02.833 11:00:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:02.833 11:00:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.833 11:00:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.833 11:00:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.833 ************************************ 00:06:02.833 START TEST locking_overlapped_coremask 00:06:02.833 ************************************ 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3893989 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3893989 /var/tmp/spdk.sock 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3893989 ']' 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.833 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.833 [2024-11-19 11:00:11.113729] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:06:02.833 [2024-11-19 11:00:11.113784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893989 ] 00:06:03.094 [2024-11-19 11:00:11.198024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.094 [2024-11-19 11:00:11.240556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.094 [2024-11-19 11:00:11.240672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.094 [2024-11-19 11:00:11.240675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3894163 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3894163 /var/tmp/spdk2.sock 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3894163 /var/tmp/spdk2.sock 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3894163 /var/tmp/spdk2.sock 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3894163 ']' 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.665 11:00:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.665 [2024-11-19 11:00:11.960673] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:06:03.665 [2024-11-19 11:00:11.960727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894163 ] 00:06:03.925 [2024-11-19 11:00:12.059602] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3893989 has claimed it. 00:06:03.925 [2024-11-19 11:00:12.059637] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3894163) - No such process 00:06:04.496 ERROR: process (pid: 3894163) is no longer running 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3893989 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3893989 ']' 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3893989 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3893989 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3893989' 00:06:04.496 killing process with pid 3893989 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3893989 00:06:04.496 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3893989 00:06:04.757 00:06:04.757 real 0m1.813s 00:06:04.757 user 0m5.210s 00:06:04.757 sys 0m0.400s 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.757 
************************************ 00:06:04.757 END TEST locking_overlapped_coremask 00:06:04.757 ************************************ 00:06:04.757 11:00:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:04.757 11:00:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.757 11:00:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.757 11:00:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.757 ************************************ 00:06:04.757 START TEST locking_overlapped_coremask_via_rpc 00:06:04.757 ************************************ 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3894467 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3894467 /var/tmp/spdk.sock 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3894467 ']' 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:04.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.757 11:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.757 [2024-11-19 11:00:13.000761] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:06:04.757 [2024-11-19 11:00:13.000816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894467 ] 00:06:04.757 [2024-11-19 11:00:13.080399] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.757 [2024-11-19 11:00:13.080434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.017 [2024-11-19 11:00:13.119566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.017 [2024-11-19 11:00:13.119681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.017 [2024-11-19 11:00:13.119683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3894529 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 3894529 /var/tmp/spdk2.sock 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3894529 ']' 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.588 11:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.588 [2024-11-19 11:00:13.842432] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:06:05.588 [2024-11-19 11:00:13.842480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894529 ] 00:06:05.848 [2024-11-19 11:00:13.941658] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.848 [2024-11-19 11:00:13.941684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.848 [2024-11-19 11:00:14.000689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.848 [2024-11-19 11:00:14.003985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.848 [2024-11-19 11:00:14.003987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.417 11:00:14 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.417 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.418 [2024-11-19 11:00:14.652925] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3894467 has claimed it. 00:06:06.418 request: 00:06:06.418 { 00:06:06.418 "method": "framework_enable_cpumask_locks", 00:06:06.418 "req_id": 1 00:06:06.418 } 00:06:06.418 Got JSON-RPC error response 00:06:06.418 response: 00:06:06.418 { 00:06:06.418 "code": -32603, 00:06:06.418 "message": "Failed to claim CPU core: 2" 00:06:06.418 } 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3894467 /var/tmp/spdk.sock 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3894467 ']' 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.418 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3894529 /var/tmp/spdk2.sock 00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3894529 ']' 00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.678 11:00:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.678 11:00:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.678 11:00:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.678 11:00:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:06.678 11:00:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.678 11:00:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.678 11:00:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.678 00:06:06.678 real 0m2.080s 00:06:06.678 user 0m0.869s 00:06:06.678 sys 0m0.144s 00:06:06.678 11:00:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.678 11:00:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.678 ************************************ 00:06:06.678 END TEST locking_overlapped_coremask_via_rpc 00:06:06.678 ************************************ 00:06:06.939 11:00:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:06.939 11:00:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3894467 ]] 00:06:06.939 11:00:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3894467 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3894467 ']' 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3894467 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3894467 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3894467' 00:06:06.939 killing process with pid 3894467 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3894467 00:06:06.939 11:00:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3894467 00:06:07.199 11:00:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3894529 ]] 00:06:07.199 11:00:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3894529 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3894529 ']' 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3894529 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3894529 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3894529' 00:06:07.199 killing process with pid 3894529 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3894529 00:06:07.199 11:00:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3894529 00:06:07.459 11:00:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:07.459 11:00:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:07.459 11:00:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3894467 ]] 00:06:07.459 11:00:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3894467 00:06:07.459 11:00:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3894467 ']' 00:06:07.459 11:00:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3894467 00:06:07.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3894467) - No such process 00:06:07.459 11:00:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3894467 is not found' 00:06:07.459 Process with pid 3894467 is not found 00:06:07.459 11:00:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3894529 ]] 00:06:07.459 11:00:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3894529 00:06:07.459 11:00:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3894529 ']' 00:06:07.459 11:00:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3894529 00:06:07.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3894529) - No such process 00:06:07.459 11:00:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3894529 is not found' 00:06:07.459 Process with pid 3894529 is not found 00:06:07.459 11:00:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:07.459 00:06:07.459 real 0m16.250s 00:06:07.459 user 0m28.484s 00:06:07.459 sys 0m4.909s 00:06:07.459 11:00:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.459 
11:00:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.459 ************************************ 00:06:07.459 END TEST cpu_locks 00:06:07.459 ************************************ 00:06:07.460 00:06:07.460 real 0m41.204s 00:06:07.460 user 1m19.147s 00:06:07.460 sys 0m8.282s 00:06:07.460 11:00:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.460 11:00:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.460 ************************************ 00:06:07.460 END TEST event 00:06:07.460 ************************************ 00:06:07.460 11:00:15 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:07.460 11:00:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.460 11:00:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.460 11:00:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.460 ************************************ 00:06:07.460 START TEST thread 00:06:07.460 ************************************ 00:06:07.460 11:00:15 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:07.460 * Looking for test storage... 
00:06:07.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:07.720 11:00:15 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.720 11:00:15 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.720 11:00:15 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.720 11:00:15 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.720 11:00:15 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.720 11:00:15 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.720 11:00:15 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.720 11:00:15 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.720 11:00:15 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.720 11:00:15 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.720 11:00:15 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.720 11:00:15 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.720 11:00:15 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.720 11:00:15 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.720 11:00:15 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.720 11:00:15 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:07.720 11:00:15 thread -- scripts/common.sh@345 -- # : 1 00:06:07.720 11:00:15 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.720 11:00:15 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.720 11:00:15 thread -- scripts/common.sh@365 -- # decimal 1 00:06:07.720 11:00:15 thread -- scripts/common.sh@353 -- # local d=1 00:06:07.720 11:00:15 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.720 11:00:15 thread -- scripts/common.sh@355 -- # echo 1 00:06:07.720 11:00:15 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.720 11:00:15 thread -- scripts/common.sh@366 -- # decimal 2 00:06:07.720 11:00:15 thread -- scripts/common.sh@353 -- # local d=2 00:06:07.720 11:00:15 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.720 11:00:15 thread -- scripts/common.sh@355 -- # echo 2 00:06:07.720 11:00:15 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.721 11:00:15 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.721 11:00:15 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.721 11:00:15 thread -- scripts/common.sh@368 -- # return 0 00:06:07.721 11:00:15 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.721 11:00:15 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.721 --rc genhtml_branch_coverage=1 00:06:07.721 --rc genhtml_function_coverage=1 00:06:07.721 --rc genhtml_legend=1 00:06:07.721 --rc geninfo_all_blocks=1 00:06:07.721 --rc geninfo_unexecuted_blocks=1 00:06:07.721 00:06:07.721 ' 00:06:07.721 11:00:15 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.721 --rc genhtml_branch_coverage=1 00:06:07.721 --rc genhtml_function_coverage=1 00:06:07.721 --rc genhtml_legend=1 00:06:07.721 --rc geninfo_all_blocks=1 00:06:07.721 --rc geninfo_unexecuted_blocks=1 00:06:07.721 00:06:07.721 ' 00:06:07.721 11:00:15 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.721 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.721 --rc genhtml_branch_coverage=1 00:06:07.721 --rc genhtml_function_coverage=1 00:06:07.721 --rc genhtml_legend=1 00:06:07.721 --rc geninfo_all_blocks=1 00:06:07.721 --rc geninfo_unexecuted_blocks=1 00:06:07.721 00:06:07.721 ' 00:06:07.721 11:00:15 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.721 --rc genhtml_branch_coverage=1 00:06:07.721 --rc genhtml_function_coverage=1 00:06:07.721 --rc genhtml_legend=1 00:06:07.721 --rc geninfo_all_blocks=1 00:06:07.721 --rc geninfo_unexecuted_blocks=1 00:06:07.721 00:06:07.721 ' 00:06:07.721 11:00:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.721 11:00:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:07.721 11:00:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.721 11:00:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.721 ************************************ 00:06:07.721 START TEST thread_poller_perf 00:06:07.721 ************************************ 00:06:07.721 11:00:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.721 [2024-11-19 11:00:15.974919] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:06:07.721 [2024-11-19 11:00:15.975031] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895053 ] 00:06:07.721 [2024-11-19 11:00:16.060694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.981 [2024-11-19 11:00:16.102909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.981 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:08.923 [2024-11-19T10:00:17.275Z] ====================================== 00:06:08.923 [2024-11-19T10:00:17.275Z] busy:2413659548 (cyc) 00:06:08.923 [2024-11-19T10:00:17.275Z] total_run_count: 288000 00:06:08.923 [2024-11-19T10:00:17.275Z] tsc_hz: 2400000000 (cyc) 00:06:08.923 [2024-11-19T10:00:17.275Z] ====================================== 00:06:08.923 [2024-11-19T10:00:17.275Z] poller_cost: 8380 (cyc), 3491 (nsec) 00:06:08.923 00:06:08.923 real 0m1.192s 00:06:08.923 user 0m1.109s 00:06:08.923 sys 0m0.078s 00:06:08.923 11:00:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.923 11:00:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.923 ************************************ 00:06:08.923 END TEST thread_poller_perf 00:06:08.923 ************************************ 00:06:08.923 11:00:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.923 11:00:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.923 11:00:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.923 11:00:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.923 ************************************ 00:06:08.923 START TEST thread_poller_perf 00:06:08.923 
************************************ 00:06:08.923 11:00:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.923 [2024-11-19 11:00:17.242826] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:06:08.923 [2024-11-19 11:00:17.242917] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895339 ] 00:06:09.183 [2024-11-19 11:00:17.323400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.183 [2024-11-19 11:00:17.357166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.183 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:10.125 [2024-11-19T10:00:18.477Z] ====================================== 00:06:10.125 [2024-11-19T10:00:18.477Z] busy:2402074554 (cyc) 00:06:10.125 [2024-11-19T10:00:18.477Z] total_run_count: 3812000 00:06:10.125 [2024-11-19T10:00:18.477Z] tsc_hz: 2400000000 (cyc) 00:06:10.125 [2024-11-19T10:00:18.477Z] ====================================== 00:06:10.125 [2024-11-19T10:00:18.477Z] poller_cost: 630 (cyc), 262 (nsec) 00:06:10.125 00:06:10.125 real 0m1.168s 00:06:10.125 user 0m1.097s 00:06:10.125 sys 0m0.067s 00:06:10.125 11:00:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.125 11:00:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.125 ************************************ 00:06:10.126 END TEST thread_poller_perf 00:06:10.126 ************************************ 00:06:10.126 11:00:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:10.126 00:06:10.126 real 0m2.717s 00:06:10.126 user 0m2.376s 00:06:10.126 sys 0m0.356s 00:06:10.126 11:00:18 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.126 11:00:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.126 ************************************ 00:06:10.126 END TEST thread 00:06:10.126 ************************************ 00:06:10.126 11:00:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:10.126 11:00:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:10.126 11:00:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.126 11:00:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.126 11:00:18 -- common/autotest_common.sh@10 -- # set +x 00:06:10.386 ************************************ 00:06:10.386 START TEST app_cmdline 00:06:10.386 ************************************ 00:06:10.386 11:00:18 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:10.386 * Looking for test storage... 00:06:10.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:10.386 11:00:18 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.386 11:00:18 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.386 11:00:18 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.386 11:00:18 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.386 11:00:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:10.387 11:00:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:10.387 11:00:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.387 11:00:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:10.387 11:00:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.387 11:00:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.387 11:00:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.387 11:00:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.387 --rc genhtml_branch_coverage=1 
00:06:10.387 --rc genhtml_function_coverage=1 00:06:10.387 --rc genhtml_legend=1 00:06:10.387 --rc geninfo_all_blocks=1 00:06:10.387 --rc geninfo_unexecuted_blocks=1 00:06:10.387 00:06:10.387 ' 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.387 --rc genhtml_branch_coverage=1 00:06:10.387 --rc genhtml_function_coverage=1 00:06:10.387 --rc genhtml_legend=1 00:06:10.387 --rc geninfo_all_blocks=1 00:06:10.387 --rc geninfo_unexecuted_blocks=1 00:06:10.387 00:06:10.387 ' 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.387 --rc genhtml_branch_coverage=1 00:06:10.387 --rc genhtml_function_coverage=1 00:06:10.387 --rc genhtml_legend=1 00:06:10.387 --rc geninfo_all_blocks=1 00:06:10.387 --rc geninfo_unexecuted_blocks=1 00:06:10.387 00:06:10.387 ' 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.387 --rc genhtml_branch_coverage=1 00:06:10.387 --rc genhtml_function_coverage=1 00:06:10.387 --rc genhtml_legend=1 00:06:10.387 --rc geninfo_all_blocks=1 00:06:10.387 --rc geninfo_unexecuted_blocks=1 00:06:10.387 00:06:10.387 ' 00:06:10.387 11:00:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:10.387 11:00:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3895734 00:06:10.387 11:00:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3895734 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3895734 ']' 00:06:10.387 11:00:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.387 11:00:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.647 [2024-11-19 11:00:18.763757] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:06:10.647 [2024-11-19 11:00:18.763812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895734 ] 00:06:10.647 [2024-11-19 11:00:18.842495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.647 [2024-11-19 11:00:18.878260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.223 11:00:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.223 11:00:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:11.223 11:00:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:11.483 { 00:06:11.483 "version": "SPDK v25.01-pre git sha1 029355612", 00:06:11.483 "fields": { 00:06:11.483 "major": 25, 00:06:11.483 "minor": 1, 00:06:11.483 "patch": 0, 00:06:11.483 "suffix": "-pre", 00:06:11.483 "commit": "029355612" 00:06:11.483 } 00:06:11.483 } 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.483 11:00:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:11.483 11:00:19 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.744 request: 00:06:11.744 { 00:06:11.744 "method": "env_dpdk_get_mem_stats", 00:06:11.744 "req_id": 1 00:06:11.744 } 00:06:11.744 Got JSON-RPC error response 00:06:11.744 response: 00:06:11.744 { 00:06:11.744 "code": -32601, 00:06:11.744 "message": "Method not found" 00:06:11.744 } 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.744 11:00:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3895734 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3895734 ']' 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3895734 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.744 11:00:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3895734 00:06:11.744 11:00:20 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.744 11:00:20 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.744 11:00:20 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3895734' 00:06:11.744 killing process with pid 3895734 00:06:11.744 
11:00:20 app_cmdline -- common/autotest_common.sh@973 -- # kill 3895734 00:06:11.744 11:00:20 app_cmdline -- common/autotest_common.sh@978 -- # wait 3895734 00:06:12.005 00:06:12.005 real 0m1.738s 00:06:12.005 user 0m2.097s 00:06:12.005 sys 0m0.449s 00:06:12.005 11:00:20 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.005 11:00:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.005 ************************************ 00:06:12.005 END TEST app_cmdline 00:06:12.005 ************************************ 00:06:12.005 11:00:20 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.005 11:00:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.005 11:00:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.005 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:06:12.005 ************************************ 00:06:12.005 START TEST version 00:06:12.005 ************************************ 00:06:12.005 11:00:20 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.266 * Looking for test storage... 
00:06:12.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.266 11:00:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.266 11:00:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.266 11:00:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.266 11:00:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.266 11:00:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.266 11:00:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.266 11:00:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.266 11:00:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.266 11:00:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.266 11:00:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.266 11:00:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.266 11:00:20 version -- scripts/common.sh@344 -- # case "$op" in 00:06:12.266 11:00:20 version -- scripts/common.sh@345 -- # : 1 00:06:12.266 11:00:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.266 11:00:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.266 11:00:20 version -- scripts/common.sh@365 -- # decimal 1 00:06:12.266 11:00:20 version -- scripts/common.sh@353 -- # local d=1 00:06:12.266 11:00:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.266 11:00:20 version -- scripts/common.sh@355 -- # echo 1 00:06:12.266 11:00:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.266 11:00:20 version -- scripts/common.sh@366 -- # decimal 2 00:06:12.266 11:00:20 version -- scripts/common.sh@353 -- # local d=2 00:06:12.266 11:00:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.266 11:00:20 version -- scripts/common.sh@355 -- # echo 2 00:06:12.266 11:00:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.266 11:00:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.266 11:00:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.266 11:00:20 version -- scripts/common.sh@368 -- # return 0 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.266 --rc genhtml_branch_coverage=1 00:06:12.266 --rc genhtml_function_coverage=1 00:06:12.266 --rc genhtml_legend=1 00:06:12.266 --rc geninfo_all_blocks=1 00:06:12.266 --rc geninfo_unexecuted_blocks=1 00:06:12.266 00:06:12.266 ' 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.266 --rc genhtml_branch_coverage=1 00:06:12.266 --rc genhtml_function_coverage=1 00:06:12.266 --rc genhtml_legend=1 00:06:12.266 --rc geninfo_all_blocks=1 00:06:12.266 --rc geninfo_unexecuted_blocks=1 00:06:12.266 00:06:12.266 ' 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.266 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.266 --rc genhtml_branch_coverage=1 00:06:12.266 --rc genhtml_function_coverage=1 00:06:12.266 --rc genhtml_legend=1 00:06:12.266 --rc geninfo_all_blocks=1 00:06:12.266 --rc geninfo_unexecuted_blocks=1 00:06:12.266 00:06:12.266 ' 00:06:12.266 11:00:20 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.266 --rc genhtml_branch_coverage=1 00:06:12.266 --rc genhtml_function_coverage=1 00:06:12.266 --rc genhtml_legend=1 00:06:12.266 --rc geninfo_all_blocks=1 00:06:12.266 --rc geninfo_unexecuted_blocks=1 00:06:12.266 00:06:12.266 ' 00:06:12.266 11:00:20 version -- app/version.sh@17 -- # get_header_version major 00:06:12.266 11:00:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.266 11:00:20 version -- app/version.sh@14 -- # cut -f2 00:06:12.266 11:00:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.266 11:00:20 version -- app/version.sh@17 -- # major=25 00:06:12.266 11:00:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:12.266 11:00:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.266 11:00:20 version -- app/version.sh@14 -- # cut -f2 00:06:12.266 11:00:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.266 11:00:20 version -- app/version.sh@18 -- # minor=1 00:06:12.266 11:00:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:12.266 11:00:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.266 11:00:20 version -- app/version.sh@14 -- # cut -f2 00:06:12.266 11:00:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.266 
11:00:20 version -- app/version.sh@19 -- # patch=0 00:06:12.266 11:00:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:12.266 11:00:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.266 11:00:20 version -- app/version.sh@14 -- # cut -f2 00:06:12.266 11:00:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.266 11:00:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:12.266 11:00:20 version -- app/version.sh@22 -- # version=25.1 00:06:12.266 11:00:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.266 11:00:20 version -- app/version.sh@28 -- # version=25.1rc0 00:06:12.267 11:00:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:12.267 11:00:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:12.267 11:00:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:12.267 11:00:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:12.267 00:06:12.267 real 0m0.275s 00:06:12.267 user 0m0.161s 00:06:12.267 sys 0m0.163s 00:06:12.267 11:00:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.267 11:00:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:12.267 ************************************ 00:06:12.267 END TEST version 00:06:12.267 ************************************ 00:06:12.527 11:00:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:12.527 11:00:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:12.527 11:00:20 -- spdk/autotest.sh@194 -- # uname -s 00:06:12.527 11:00:20 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:12.527 11:00:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.527 11:00:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.527 11:00:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:12.527 11:00:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:12.528 11:00:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:12.528 11:00:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.528 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:06:12.528 11:00:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:12.528 11:00:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:12.528 11:00:20 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:12.528 11:00:20 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:12.528 11:00:20 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:12.528 11:00:20 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:12.528 11:00:20 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.528 11:00:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.528 11:00:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.528 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:06:12.528 ************************************ 00:06:12.528 START TEST nvmf_tcp 00:06:12.528 ************************************ 00:06:12.528 11:00:20 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.528 * Looking for test storage... 
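The version test above greps the `SPDK_VERSION_*` defines out of `include/spdk/version.h` and assembles the version string; per the trace, patch 0 is dropped and a `-pre` suffix becomes an `rc0` tag. A hedged sketch of that pipeline against a stand-in header (the temp file and `awk` field split are simplifications; version.sh itself pipes `grep` through `cut -f2 | tr -d '"'`):

```shell
# Stand-in header; the real one is include/spdk/version.h in the SPDK tree.
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

get_header_version() {
  # simplified field split; the real script uses cut -f2 | tr -d '"'
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | awk '{print $3}' | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="${major}.${minor}"
(( patch != 0 )) && version="${version}.${patch}"
[[ $suffix == -pre ]] && version="${version}rc0"
echo "$version"    # 25.1rc0, matching the trace
rm -f "$hdr"
```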
00:06:12.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:12.528 11:00:20 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.528 11:00:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.528 11:00:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.790 11:00:20 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.790 --rc genhtml_branch_coverage=1 00:06:12.790 --rc genhtml_function_coverage=1 00:06:12.790 --rc genhtml_legend=1 00:06:12.790 --rc geninfo_all_blocks=1 00:06:12.790 --rc geninfo_unexecuted_blocks=1 00:06:12.790 00:06:12.790 ' 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.790 --rc genhtml_branch_coverage=1 00:06:12.790 --rc genhtml_function_coverage=1 00:06:12.790 --rc genhtml_legend=1 00:06:12.790 --rc geninfo_all_blocks=1 00:06:12.790 --rc geninfo_unexecuted_blocks=1 00:06:12.790 00:06:12.790 ' 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:12.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.790 --rc genhtml_branch_coverage=1 00:06:12.790 --rc genhtml_function_coverage=1 00:06:12.790 --rc genhtml_legend=1 00:06:12.790 --rc geninfo_all_blocks=1 00:06:12.790 --rc geninfo_unexecuted_blocks=1 00:06:12.790 00:06:12.790 ' 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.790 --rc genhtml_branch_coverage=1 00:06:12.790 --rc genhtml_function_coverage=1 00:06:12.790 --rc genhtml_legend=1 00:06:12.790 --rc geninfo_all_blocks=1 00:06:12.790 --rc geninfo_unexecuted_blocks=1 00:06:12.790 00:06:12.790 ' 00:06:12.790 11:00:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:12.790 11:00:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:12.790 11:00:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.790 11:00:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.790 ************************************ 00:06:12.790 START TEST nvmf_target_core 00:06:12.790 ************************************ 00:06:12.790 11:00:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:12.790 * Looking for test storage... 
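The `lt 1.15 2` checks traced in these sections call scripts/common.sh's `cmp_versions`: both strings are split on `.`, `-`, and `:` into arrays, then compared component-wise as integers, padding the shorter array with zeros. A hedged, numeric-only sketch of the same idea (the real helper also dispatches on `<`, `>`, and `=` operators):

```shell
# Returns 0 (true) when version $1 is strictly less than version $2.
version_lt() {
  local -a a b
  IFS='.-:' read -ra a <<< "$1"
  IFS='.-:' read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the sketch assumes purely numeric components; a suffix like `rc0` would need the string handling the real script layers on top.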
00:06:12.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:12.790 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.790 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.790 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.052 --rc genhtml_branch_coverage=1 00:06:13.052 --rc genhtml_function_coverage=1 00:06:13.052 --rc genhtml_legend=1 00:06:13.052 --rc geninfo_all_blocks=1 00:06:13.052 --rc geninfo_unexecuted_blocks=1 00:06:13.052 00:06:13.052 ' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.052 --rc genhtml_branch_coverage=1 
00:06:13.052 --rc genhtml_function_coverage=1 00:06:13.052 --rc genhtml_legend=1 00:06:13.052 --rc geninfo_all_blocks=1 00:06:13.052 --rc geninfo_unexecuted_blocks=1 00:06:13.052 00:06:13.052 ' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.052 --rc genhtml_branch_coverage=1 00:06:13.052 --rc genhtml_function_coverage=1 00:06:13.052 --rc genhtml_legend=1 00:06:13.052 --rc geninfo_all_blocks=1 00:06:13.052 --rc geninfo_unexecuted_blocks=1 00:06:13.052 00:06:13.052 ' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.052 --rc genhtml_branch_coverage=1 00:06:13.052 --rc genhtml_function_coverage=1 00:06:13.052 --rc genhtml_legend=1 00:06:13.052 --rc geninfo_all_blocks=1 00:06:13.052 --rc geninfo_unexecuted_blocks=1 00:06:13.052 00:06:13.052 ' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.052 11:00:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
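The `[: : integer expression expected` message logged here comes from common.sh line 33 applying `-eq` to an empty string (the flag variable is unset in this run). A minimal reproduction and the usual default-value guard; the variable name is hypothetical:

```shell
flag=""                                # empty, like the unset build flag in the log
[ "$flag" -eq 1 ] 2>/dev/null || echo "empty string fails -eq"
# supplying a default sidesteps the error entirely:
[ "${flag:-0}" -eq 1 ] || echo "flag disabled"
```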
00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:13.053 ************************************ 00:06:13.053 START TEST nvmf_abort 00:06:13.053 ************************************ 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:13.053 * Looking for test storage... 
00:06:13.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.053 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.314 
11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.314 --rc genhtml_branch_coverage=1 00:06:13.314 --rc genhtml_function_coverage=1 00:06:13.314 --rc genhtml_legend=1 00:06:13.314 --rc geninfo_all_blocks=1 00:06:13.314 --rc 
geninfo_unexecuted_blocks=1 00:06:13.314 00:06:13.314 ' 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.314 --rc genhtml_branch_coverage=1 00:06:13.314 --rc genhtml_function_coverage=1 00:06:13.314 --rc genhtml_legend=1 00:06:13.314 --rc geninfo_all_blocks=1 00:06:13.314 --rc geninfo_unexecuted_blocks=1 00:06:13.314 00:06:13.314 ' 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.314 --rc genhtml_branch_coverage=1 00:06:13.314 --rc genhtml_function_coverage=1 00:06:13.314 --rc genhtml_legend=1 00:06:13.314 --rc geninfo_all_blocks=1 00:06:13.314 --rc geninfo_unexecuted_blocks=1 00:06:13.314 00:06:13.314 ' 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.314 --rc genhtml_branch_coverage=1 00:06:13.314 --rc genhtml_function_coverage=1 00:06:13.314 --rc genhtml_legend=1 00:06:13.314 --rc geninfo_all_blocks=1 00:06:13.314 --rc geninfo_unexecuted_blocks=1 00:06:13.314 00:06:13.314 ' 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.314 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.314 11:00:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:13.315 11:00:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:21.453 11:00:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:21.453 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:21.453 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:21.453 11:00:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:21.453 Found net devices under 0000:31:00.0: cvl_0_0 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.453 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:06:21.454 Found net devices under 0000:31:00.1: cvl_0_1 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:21.454 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:21.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:21.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:06:21.715 00:06:21.715 --- 10.0.0.2 ping statistics --- 00:06:21.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.715 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:06:21.715 00:06:21.715 --- 10.0.0.1 ping statistics --- 00:06:21.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.715 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3900817 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3900817 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3900817 ']' 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.715 11:00:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.715 [2024-11-19 11:00:30.035249] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:06:21.715 [2024-11-19 11:00:30.035317] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.976 [2024-11-19 11:00:30.146602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.976 [2024-11-19 11:00:30.200785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.976 [2024-11-19 11:00:30.200841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.976 [2024-11-19 11:00:30.200851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.976 [2024-11-19 11:00:30.200859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.976 [2024-11-19 11:00:30.200878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:21.976 [2024-11-19 11:00:30.202756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.976 [2024-11-19 11:00:30.202924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.976 [2024-11-19 11:00:30.202947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.546 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.807 [2024-11-19 11:00:30.898694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.807 Malloc0 00:06:22.807 11:00:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.807 Delay0 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.807 [2024-11-19 11:00:30.976029] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.807 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:22.807 [2024-11-19 11:00:31.065232] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:25.353 Initializing NVMe Controllers 00:06:25.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:25.353 controller IO queue size 128 less than required 00:06:25.353 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:25.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:25.353 Initialization complete. Launching workers. 
00:06:25.353 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29100 00:06:25.353 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29165, failed to submit 62 00:06:25.353 success 29104, unsuccessful 61, failed 0 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:25.353 rmmod nvme_tcp 00:06:25.353 rmmod nvme_fabrics 00:06:25.353 rmmod nvme_keyring 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:25.353 11:00:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3900817 ']' 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3900817 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3900817 ']' 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3900817 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:25.353 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900817 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900817' 00:06:25.354 killing process with pid 3900817 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3900817 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3900817 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@791 -- # iptables-restore 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.354 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:27.898 00:06:27.898 real 0m14.405s 00:06:27.898 user 0m14.421s 00:06:27.898 sys 0m7.291s 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:27.898 ************************************ 00:06:27.898 END TEST nvmf_abort 00:06:27.898 ************************************ 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.898 ************************************ 00:06:27.898 START TEST nvmf_ns_hotplug_stress 00:06:27.898 ************************************ 00:06:27.898 11:00:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:27.898 * Looking for test storage... 00:06:27.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.898 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.899 
11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.899 11:00:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.899 --rc genhtml_branch_coverage=1 00:06:27.899 --rc genhtml_function_coverage=1 00:06:27.899 --rc genhtml_legend=1 00:06:27.899 --rc geninfo_all_blocks=1 00:06:27.899 --rc geninfo_unexecuted_blocks=1 00:06:27.899 00:06:27.899 ' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.899 --rc genhtml_branch_coverage=1 00:06:27.899 --rc genhtml_function_coverage=1 00:06:27.899 --rc genhtml_legend=1 00:06:27.899 --rc geninfo_all_blocks=1 00:06:27.899 --rc geninfo_unexecuted_blocks=1 00:06:27.899 00:06:27.899 ' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.899 --rc genhtml_branch_coverage=1 00:06:27.899 --rc genhtml_function_coverage=1 00:06:27.899 --rc genhtml_legend=1 00:06:27.899 --rc geninfo_all_blocks=1 00:06:27.899 --rc geninfo_unexecuted_blocks=1 00:06:27.899 00:06:27.899 ' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.899 --rc genhtml_branch_coverage=1 00:06:27.899 --rc genhtml_function_coverage=1 00:06:27.899 --rc genhtml_legend=1 00:06:27.899 --rc geninfo_all_blocks=1 00:06:27.899 --rc geninfo_unexecuted_blocks=1 00:06:27.899 
00:06:27.899 ' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.899 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:27.900 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:36.036 11:00:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:36.036 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.036 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:36.037 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:36.037 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:36.037 11:00:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:36.037 Found net devices under 0000:31:00.0: cvl_0_0 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.037 11:00:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:36.037 Found net devices under 0000:31:00.1: cvl_0_1 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:36.037 11:00:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:36.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:36.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms
00:06:36.037
00:06:36.037 --- 10.0.0.2 ping statistics ---
00:06:36.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:36.037 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:36.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:36.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:06:36.037
00:06:36.037 --- 10.0.0.1 ping statistics ---
00:06:36.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:36.037 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3906306
00:06:36.037 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3906306
00:06:36.038 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:36.038 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3906306 ']'
00:06:36.038 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.038 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:36.038 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:36.038 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:36.038 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:36.298 [2024-11-19 11:00:44.437285] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization...
00:06:36.298 [2024-11-19 11:00:44.437352] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:36.298 [2024-11-19 11:00:44.546379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:36.298 [2024-11-19 11:00:44.597459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:36.298 [2024-11-19 11:00:44.597512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:36.298 [2024-11-19 11:00:44.597520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:36.298 [2024-11-19 11:00:44.597528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:36.298 [2024-11-19 11:00:44.597534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
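The xtrace entries in this log amount to a fixed sequence of SPDK `rpc.py` calls. Below is a minimal dry-run sketch of that flow for readability only: `$RPC` is stubbed with `echo` (a real run would invoke `scripts/rpc.py` against a live `nvmf_tgt`), and the loop count of 3 is illustrative, not the ~34 iterations the log actually records.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress flow traced in this log.
# RPC is stubbed with echo so the control flow can be read without an SPDK target.
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport init
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0                    # backing bdev
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns $NQN Delay0
$RPC bdev_null_create NULL1 1000 512
$RPC nvmf_subsystem_add_ns $NQN NULL1

# While spdk_nvme_perf runs against the subsystem, namespace 1 is repeatedly
# detached and re-attached, and NULL1 is resized upward one step per iteration.
null_size=1000
for _ in 1 2 3; do                      # the log shows ~34 iterations (1001..1034)
  $RPC nvmf_subsystem_remove_ns $NQN 1
  $RPC nvmf_subsystem_add_ns $NQN Delay0
  null_size=$((null_size + 1))
  $RPC bdev_null_resize NULL1 $null_size
done
```

The `kill -0 $PERF_PID` checks interleaved in the trace simply verify that the perf workload is still alive before each hotplug cycle.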
00:06:36.298 [2024-11-19 11:00:44.599641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:36.298 [2024-11-19 11:00:44.599808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:36.298 [2024-11-19 11:00:44.599808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:37.239 [2024-11-19 11:00:45.427889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:37.239 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:37.501 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:37.501 [2024-11-19 11:00:45.797365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:37.501 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:37.761 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:06:38.021 Malloc0
00:06:38.021 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:38.021 Delay0
00:06:38.282 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:38.282 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:06:38.542 NULL1
00:06:38.542 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:06:38.803 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:06:38.803 11:00:46
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3906817
00:06:38.803 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:38.803 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:38.803 11:00:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:39.063 11:00:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:06:39.063 11:00:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:06:39.324 true
00:06:39.324 11:00:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:39.324 11:00:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:39.324 11:00:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:39.584 11:00:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:06:39.584 11:00:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:06:39.844 true
00:06:39.844 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:39.845 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:39.845 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:40.105 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:06:40.105 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:06:40.365 true
00:06:40.365 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:40.365 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:40.625 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:40.625 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:06:40.625 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:06:40.885 true
00:06:40.885 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:40.885 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:41.144 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:41.144 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:06:41.144 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:06:41.404 true
00:06:41.404 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:41.404 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:41.666 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:41.666 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:06:41.666 11:00:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:06:41.927 true
00:06:41.927 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:41.927 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:42.188 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:42.449 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:06:42.449 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:06:42.449 true
00:06:42.449 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:42.449 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:42.762 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:42.762 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:06:42.762 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:06:43.022 true
00:06:43.022 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:43.022 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:43.282 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:43.543 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:06:43.543 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:06:43.543 true
00:06:43.543 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:43.543 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:43.804 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:44.065 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:06:44.065 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:06:44.065 true
00:06:44.065 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:44.065 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:44.325 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:44.586 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:06:44.586 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:06:44.586 true
00:06:44.586 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:44.586 11:00:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:44.847 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.107 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:06:45.107 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:06:45.107 true
00:06:45.369 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:45.369 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:45.369 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.629 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:06:45.630 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:06:45.890 true
00:06:45.890 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:45.890 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:45.890 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.150 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:06:46.151 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:06:46.411 true
00:06:46.411 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:46.411 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:46.411 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.672 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:06:46.672 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:06:46.933 true
00:06:46.933 11:00:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:46.933 11:00:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.194 11:00:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:47.194 11:00:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:06:47.194 11:00:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:06:47.454 true
00:06:47.454 11:00:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:47.454 11:00:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.714 11:00:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:47.714 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:06:47.714 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:06:47.974 true
00:06:47.974 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:47.974 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.235 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:48.235 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:06:48.235 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:06:48.496 true
00:06:48.496 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:48.496 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.757 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:49.017 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:06:49.017 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:06:49.017 true
00:06:49.017 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:49.017 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.278 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:49.539 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:06:49.539 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:06:49.539 true
00:06:49.539 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:49.539 11:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.799 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:50.060 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:06:50.060 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:06:50.060 true
00:06:50.060 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:50.060 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.321 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:50.581 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:06:50.581 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:06:50.581 true
00:06:50.581 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:50.581 11:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.841 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:51.103 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:06:51.103 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:06:51.103 true
00:06:51.364 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:51.364 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.364 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:51.625 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:06:51.625 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:06:51.886 true
00:06:51.886 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:51.886 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.886 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:52.147 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:06:52.147 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:06:52.415 true
00:06:52.415 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:52.415 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.415 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:52.676 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:06:52.676 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:06:52.936 true
00:06:52.936 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:52.936 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.197 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:53.197 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:06:53.198 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:53.458 true
00:06:53.458 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:53.458 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.718 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:53.718 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:53.718 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:53.979 true
00:06:53.979 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:53.979 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.239 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:54.239 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:54.239 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:54.500 true
00:06:54.500 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:54.500 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.765 11:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:55.026 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:06:55.026 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:55.026 true
00:06:55.026 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:55.026 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.287 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:55.547 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:06:55.547 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:06:55.547 true
00:06:55.547 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:55.547 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.808 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:56.068 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:06:56.068 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:06:56.068 true
00:06:56.328 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:56.328 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:56.329 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:56.589 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:06:56.589 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:06:56.849 true
00:06:56.849 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:06:56.849 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:56.849 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:57.110 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:06:57.110 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:57.370 true 00:06:57.370 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:06:57.370 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.370 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.631 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:57.631 11:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:57.891 true 00:06:57.891 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:06:57.891 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.891 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.151 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:58.151 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:58.412 true 00:06:58.412 11:01:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:06:58.412 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.674 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.674 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:58.674 11:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:58.936 true 00:06:58.936 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:06:58.936 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.196 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.196 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:59.196 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:59.457 true 00:06:59.457 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:06:59.457 11:01:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.718 11:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.718 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:59.718 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:59.979 true 00:06:59.979 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:06:59.979 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.239 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.500 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:00.500 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:00.500 true 00:07:00.500 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:00.500 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.760 11:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.020 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:01.020 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:01.020 true 00:07:01.020 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:01.020 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.279 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.538 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:01.538 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:01.538 true 00:07:01.538 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:01.538 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.798 
11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.059 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:02.059 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:02.059 true 00:07:02.059 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:02.059 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.319 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.579 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:02.579 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:02.579 true 00:07:02.841 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:02.841 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.841 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.102 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:03.102 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:03.363 true 00:07:03.363 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:03.363 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.363 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.624 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:03.624 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:03.885 true 00:07:03.885 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:03.885 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.885 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.146 
11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:04.146 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:04.407 true 00:07:04.407 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:04.407 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.668 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.668 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:04.668 11:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:04.929 true 00:07:04.929 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:04.929 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.190 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.190 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:05.190 11:01:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:05.452 true 00:07:05.452 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:05.452 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.713 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.713 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:05.713 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:05.974 true 00:07:05.974 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:05.974 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.235 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.496 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:06.496 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:06.496 true 00:07:06.496 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:06.496 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.757 11:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.018 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:07.018 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:07.018 true 00:07:07.018 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:07.018 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.279 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.541 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:07.541 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:07.541 true 00:07:07.802 11:01:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:07.802 11:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.802 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.062 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:08.062 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:08.323 true 00:07:08.323 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:08.323 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.323 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.584 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:07:08.584 11:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:07:08.845 true 00:07:08.845 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817 00:07:08.845 11:01:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.845 Initializing NVMe Controllers
00:07:08.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:08.845 Controller IO queue size 128, less than required.
00:07:08.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:08.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:08.845 Initialization complete. Launching workers.
00:07:08.845 ========================================================
00:07:08.845 Latency(us)
00:07:08.845 Device Information : IOPS MiB/s Average min max
00:07:08.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30817.23 15.05 4153.43 1438.16 8008.42
00:07:08.845 ========================================================
00:07:08.845 Total : 30817.23 15.05 4153.43 1438.16 8008.42
00:07:08.845
00:07:08.845 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:09.106 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:07:09.106 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:07:09.367 true
00:07:09.367 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3906817
00:07:09.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3906817) - No such process
00:07:09.367 11:01:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3906817 00:07:09.367 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.627 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.627 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:09.627 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:09.627 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:09.627 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.627 11:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:09.887 null0 00:07:09.887 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.887 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.887 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:10.148 null1 00:07:10.148 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.148 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
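The repeated sh@44–sh@50 entries above are successive passes of the single-namespace hotplug loop: while the target app (PID 3906817) is still alive, namespace 1 is detached and re-attached and the backing null bdev is grown by one block per pass, until the `kill -0` probe fails with "No such process". A minimal standalone sketch of that control flow, with `scripts/rpc.py` stubbed out as a no-op `rpc` function (an assumption, so the loop can run without a live SPDK target; the real script shells out to rpc.py):

```shell
# Stand-in for scripts/rpc.py -- a no-op so this sketch runs without SPDK.
rpc() { :; }

pid=$$          # in the real script this is the nvmf target app's PID
null_size=1025  # grown by 1 each pass (the sh@49 marker in the log)

for _ in 1 2 3; do
    kill -0 "$pid" 2>/dev/null || break                          # sh@44: stop once the target dies
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: detach namespace 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                                 # sh@49
    rpc bdev_null_resize NULL1 "$null_size"                      # sh@50: grow the null bdev
done
echo "$null_size"    # prints 1028 after three passes
```

In the real run the loop ends only when the target process exits, which is why the final pass above logs the `kill: (3906817) - No such process` error before the `wait` at sh@53.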
00:07:10.148 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:10.148 null2 00:07:10.148 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.148 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.148 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:10.408 null3 00:07:10.408 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.408 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.408 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:10.669 null4 00:07:10.669 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.669 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.669 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:10.669 null5 00:07:10.669 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.669 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.669 11:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:10.929 null6 00:07:10.929 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.930 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.930 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:11.191 null7 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.191 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:11.192 
11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3913554 3913555 3913557 3913559 3913561 3913563 3913564 3913566 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
00:07:11.192 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.453 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.715 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.715 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.715 11:01:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.715 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.715 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.715 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.715 11:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.715 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.975 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.975 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.975 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.975 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.976 11:01:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.976 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.236 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.496 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.757 11:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.758 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.758 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.758 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.758 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.020 11:01:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.020 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.281 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.541 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.542 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.803 11:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.803 11:01:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.803 11:01:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.803 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.063 
11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.063 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.324 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.584 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.584 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.584 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.584 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.585 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.844 11:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.844 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.844 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.844 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.844 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.844 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.844 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.845 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.845 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 11:01:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.107 rmmod nvme_tcp 00:07:15.107 rmmod nvme_fabrics 00:07:15.107 rmmod nvme_keyring 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.107 
11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3906306 ']' 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3906306 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3906306 ']' 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3906306 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3906306 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3906306' 00:07:15.107 killing process with pid 3906306 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3906306 00:07:15.107 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3906306 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.367 11:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.279 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:17.279 00:07:17.279 real 0m49.833s 00:07:17.279 user 3m20.433s 00:07:17.279 sys 0m17.728s 00:07:17.279 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.279 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.279 ************************************ 00:07:17.279 END TEST nvmf_ns_hotplug_stress 00:07:17.279 ************************************ 00:07:17.279 11:01:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:17.279 11:01:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.279 11:01:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.279 11:01:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:17.541 ************************************ 00:07:17.541 START TEST nvmf_delete_subsystem 00:07:17.541 ************************************ 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:17.541 * Looking for test storage... 00:07:17.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.541 
11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.541 --rc genhtml_branch_coverage=1 00:07:17.541 --rc genhtml_function_coverage=1 00:07:17.541 --rc genhtml_legend=1 
00:07:17.541 --rc geninfo_all_blocks=1 00:07:17.541 --rc geninfo_unexecuted_blocks=1 00:07:17.541 00:07:17.541 ' 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.541 --rc genhtml_branch_coverage=1 00:07:17.541 --rc genhtml_function_coverage=1 00:07:17.541 --rc genhtml_legend=1 00:07:17.541 --rc geninfo_all_blocks=1 00:07:17.541 --rc geninfo_unexecuted_blocks=1 00:07:17.541 00:07:17.541 ' 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.541 --rc genhtml_branch_coverage=1 00:07:17.541 --rc genhtml_function_coverage=1 00:07:17.541 --rc genhtml_legend=1 00:07:17.541 --rc geninfo_all_blocks=1 00:07:17.541 --rc geninfo_unexecuted_blocks=1 00:07:17.541 00:07:17.541 ' 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.541 --rc genhtml_branch_coverage=1 00:07:17.541 --rc genhtml_function_coverage=1 00:07:17.541 --rc genhtml_legend=1 00:07:17.541 --rc geninfo_all_blocks=1 00:07:17.541 --rc geninfo_unexecuted_blocks=1 00:07:17.541 00:07:17.541 ' 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.541 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:17.542 11:01:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:17.542 11:01:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.694 11:01:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:25.694 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:25.694 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:25.694 Found net devices under 0000:31:00.0: cvl_0_0 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.694 11:01:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:25.694 Found net devices under 0000:31:00.1: cvl_0_1 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.694 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:07:25.695 11:01:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:07:25.956 00:07:25.956 --- 10.0.0.2 ping statistics --- 00:07:25.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.956 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:07:25.956 00:07:25.956 --- 10.0.0.1 ping statistics --- 00:07:25.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.956 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3919195 00:07:25.956 11:01:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3919195 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3919195 ']' 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.956 11:01:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.956 [2024-11-19 11:01:34.224355] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:07:25.956 [2024-11-19 11:01:34.224409] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.217 [2024-11-19 11:01:34.313723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.217 [2024-11-19 11:01:34.350389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:26.217 [2024-11-19 11:01:34.350423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.217 [2024-11-19 11:01:34.350432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.217 [2024-11-19 11:01:34.350439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.217 [2024-11-19 11:01:34.350449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.217 [2024-11-19 11:01:34.351664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.217 [2024-11-19 11:01:34.351665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 [2024-11-19 11:01:35.060665] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 [2024-11-19 11:01:35.084856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 NULL1 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.788 11:01:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 Delay0 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3919450 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:26.788 11:01:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:27.048 [2024-11-19 11:01:35.181650] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
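The trace above drives a fixed RPC sequence: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 with a 10-namespace cap, add a listener on 10.0.0.2:4420, back it with a null bdev wrapped in a delay bdev, and attach that as a namespace before launching spdk_nvme_perf. The same sequence, collected from the log into a standalone sketch (the `scripts/rpc.py` path and default RPC socket are assumptions; this only runs inside an SPDK build tree against a running nvmf_tgt):

```shell
# Sketch of the target setup performed by delete_subsystem.sh above.
# Assumes an SPDK build tree with scripts/rpc.py talking to nvmf_tgt
# on the default /var/tmp/spdk.sock; all arguments are taken from the log.
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                    # 1000 MB, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns $NQN Delay0                  # namespace backed by Delay0
```

The delay bdev's 1000000 us latencies are what let the subsequent nvmf_delete_subsystem call race against in-flight I/O, which is the point of this test.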
00:07:29.024 11:01:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.024 11:01:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.024 11:01:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 
00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.024 starting I/O failed: -6 00:07:29.024 Read completed with error (sct=0, sc=8) 00:07:29.024 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 [2024-11-19 11:01:37.347686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146c2c0 is same with the state(6) to be set 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read 
completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error 
(sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write 
completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 starting I/O failed: -6 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 [2024-11-19 11:01:37.350050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d9c000c40 is same with the state(6) to be set 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed 
with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Write completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:29.025 Read completed with error (sct=0, sc=8) 00:07:30.026 [2024-11-19 11:01:38.321897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146d5e0 is same with the state(6) to be set 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Write completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Write completed with error (sct=0, 
sc=8) 00:07:30.026 Write completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Write completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Write completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Write completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Read completed with error (sct=0, sc=8) 00:07:30.026 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 [2024-11-19 11:01:38.351833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146c4a0 is same with the state(6) to be set 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 
Write completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 [2024-11-19 11:01:38.351931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146c0e0 is same with the state(6) to be set 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed 
with error (sct=0, sc=8) 00:07:30.027 [2024-11-19 11:01:38.352435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d9c00d020 is same with the state(6) to be set 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Write completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 Read completed with error (sct=0, sc=8) 00:07:30.027 [2024-11-19 11:01:38.352554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d9c00d7e0 is same with the state(6) to be set 00:07:30.027 Initializing NVMe Controllers 00:07:30.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:30.027 Controller IO queue size 128, less than required. 00:07:30.027 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:30.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:30.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:30.027 Initialization complete. Launching workers.
00:07:30.027 ========================================================
00:07:30.027 Latency(us)
00:07:30.027 Device Information : IOPS MiB/s Average min max
00:07:30.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.73 0.08 898474.36 234.11 1044015.93
00:07:30.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.79 0.08 978698.12 286.24 2002177.56
00:07:30.027 ========================================================
00:07:30.027 Total : 324.51 0.16 936986.69 234.11 2002177.56
00:07:30.027
00:07:30.027 [2024-11-19 11:01:38.353142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146d5e0 (9): Bad file descriptor
00:07:30.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:30.027 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.027 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:30.027 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3919450
00:07:30.027 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3919450
00:07:30.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3919450) - No such process
00:07:30.597 11:01:38
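The spdk_nvme_perf summary above uses a fixed row layout: after the "from core N:" prefix the columns are IOPS, MiB/s, Average, min, max (per the "Device Information" header). A minimal illustrative sketch of pulling the per-core figures out of such a block; `parse_summary` and its returned field names are my own, not an SPDK API:

```python
import re

# Two device rows copied verbatim from the first perf run in the log.
SUMMARY = """\
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.73 0.08 898474.36 234.11 1044015.93
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.79 0.08 978698.12 286.24 2002177.56
"""

# Column order follows the "IOPS MiB/s Average min max" header line.
ROW = re.compile(
    r"from core (\d+):\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)"
)

def parse_summary(text):
    """Map core id -> per-row stats dict (field names are illustrative)."""
    rows = {}
    for m in ROW.finditer(text):
        core = int(m.group(1))
        iops, mib_s, avg_us, min_us, max_us = (float(g) for g in m.groups()[1:])
        rows[core] = {"iops": iops, "mib_s": mib_s, "avg_us": avg_us,
                      "min_us": min_us, "max_us": max_us}
    return rows

stats = parse_summary(SUMMARY)
print(stats[2]["iops"], stats[3]["avg_us"])  # 168.73 978698.12
```

The per-core IOPS sum to the Total row printed by perf (324.51, modulo rounding), which is a quick sanity check when reading these tables.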
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3919450 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3919450 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3919450 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.597 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.598 [2024-11-19 11:01:38.884317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3920145 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3920145 00:07:30.598 11:01:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:30.858 [2024-11-19 11:01:38.963620] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:31.118 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:31.118 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3920145 00:07:31.118 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:31.688 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:31.688 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3920145 00:07:31.688 11:01:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:32.258 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:32.258 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3920145 00:07:32.258 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:32.829 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:32.829 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3920145 00:07:32.829 11:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.089 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.089 11:01:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3920145 00:07:33.089 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.658 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.658 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3920145 00:07:33.658 11:01:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.918 Initializing NVMe Controllers 00:07:33.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.918 Controller IO queue size 128, less than required. 00:07:33.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:33.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:33.918 Initialization complete. Launching workers. 
00:07:33.918 ========================================================
00:07:33.918                                                                                                      Latency(us)
00:07:33.918 Device Information                     : IOPS      MiB/s    Average        min             max
00:07:33.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002039.17 1000174.78 1006160.14
00:07:33.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002973.43 1000176.77 1009730.70
00:07:33.918 ========================================================
00:07:33.918 Total                                  : 256.00 0.12 1002506.30 1000174.78 1009730.70
00:07:33.918
00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3920145 00:07:34.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3920145) - No such process 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3920145 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:07:34.177 rmmod nvme_tcp 00:07:34.177 rmmod nvme_fabrics 00:07:34.177 rmmod nvme_keyring 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3919195 ']' 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3919195 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3919195 ']' 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3919195 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.177 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3919195 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3919195' 00:07:34.437 killing process with pid 3919195 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3919195 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3919195 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.437 11:01:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:36.982
00:07:36.982 real	0m19.136s
00:07:36.982 user	0m30.919s
00:07:36.982 sys	0m7.363s
00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.982 ************************************ 00:07:36.982 END TEST
nvmf_delete_subsystem 00:07:36.982 ************************************ 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:36.982 ************************************ 00:07:36.982 START TEST nvmf_host_management 00:07:36.982 ************************************ 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:36.982 * Looking for test storage... 00:07:36.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.982 11:01:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.982 11:01:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.982 --rc genhtml_branch_coverage=1 00:07:36.982 --rc genhtml_function_coverage=1 00:07:36.982 --rc genhtml_legend=1 00:07:36.982 --rc 
geninfo_all_blocks=1 00:07:36.982 --rc geninfo_unexecuted_blocks=1 00:07:36.982 00:07:36.982 ' 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.982 --rc genhtml_branch_coverage=1 00:07:36.982 --rc genhtml_function_coverage=1 00:07:36.982 --rc genhtml_legend=1 00:07:36.982 --rc geninfo_all_blocks=1 00:07:36.982 --rc geninfo_unexecuted_blocks=1 00:07:36.982 00:07:36.982 ' 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.982 --rc genhtml_branch_coverage=1 00:07:36.982 --rc genhtml_function_coverage=1 00:07:36.982 --rc genhtml_legend=1 00:07:36.982 --rc geninfo_all_blocks=1 00:07:36.982 --rc geninfo_unexecuted_blocks=1 00:07:36.982 00:07:36.982 ' 00:07:36.982 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.982 --rc genhtml_branch_coverage=1 00:07:36.983 --rc genhtml_function_coverage=1 00:07:36.983 --rc genhtml_legend=1 00:07:36.983 --rc geninfo_all_blocks=1 00:07:36.983 --rc geninfo_unexecuted_blocks=1 00:07:36.983 00:07:36.983 ' 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.983 
11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:36.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:36.983 11:01:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:45.123 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:45.123 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.123 11:01:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:45.123 Found net devices under 0000:31:00.0: cvl_0_0 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.123 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:45.124 Found net devices under 0000:31:00.1: cvl_0_1 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:45.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:07:45.124 00:07:45.124 --- 10.0.0.2 ping statistics --- 00:07:45.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.124 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:07:45.124 00:07:45.124 --- 10.0.0.1 ping statistics --- 00:07:45.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.124 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:45.124 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.385 11:01:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3925836 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3925836 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3925836 ']' 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.385 11:01:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.385 [2024-11-19 11:01:53.537609] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:07:45.385 [2024-11-19 11:01:53.537659] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.385 [2024-11-19 11:01:53.643681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.386 [2024-11-19 11:01:53.693938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.386 [2024-11-19 11:01:53.693989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.386 [2024-11-19 11:01:53.693997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.386 [2024-11-19 11:01:53.694005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.386 [2024-11-19 11:01:53.694012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
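The netns plumbing traced earlier in this log (nvmf/common.sh@250-291) is ordinary iproute2/iptables wiring: move the target-side NIC into its own namespace so target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the link. A minimal sketch using the interface and namespace names from this log; `run` is a stand-in that records commands instead of executing them, since the real sequence needs root and the cvl_0_* devices:

```shell
# Sketch of nvmf_tcp_init's namespace wiring as traced in this log.
# `run` only accumulates the command lines; the real run needs root
# plus the cvl_0_0/cvl_0_1 net devices present on this host.
NS=cvl_0_0_ns_spdk
CMDS=""
run() { CMDS="${CMDS}$*
"; }
run ip netns add "$NS"                                        # target-side namespace
run ip link set cvl_0_0 netns "$NS"                           # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                        # reachability check
printf '%s' "$CMDS"
```

Once the namespace is up, the target binary is launched inside it (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.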
00:07:45.386 [2024-11-19 11:01:53.696107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.386 [2024-11-19 11:01:53.696273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.386 [2024-11-19 11:01:53.696438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.386 [2024-11-19 11:01:53.696438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.328 [2024-11-19 11:01:54.384154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:46.328 11:01:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.328 Malloc0 00:07:46.328 [2024-11-19 11:01:54.466619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3925907 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3925907 /var/tmp/bdevperf.sock 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3925907 ']' 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:46.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:46.328 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:46.328 { 00:07:46.328 "params": { 00:07:46.328 "name": "Nvme$subsystem", 00:07:46.328 "trtype": "$TEST_TRANSPORT", 00:07:46.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.328 "adrfam": "ipv4", 00:07:46.328 "trsvcid": "$NVMF_PORT", 00:07:46.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.328 "hdgst": ${hdgst:-false}, 
00:07:46.329 "ddgst": ${ddgst:-false} 00:07:46.329 }, 00:07:46.329 "method": "bdev_nvme_attach_controller" 00:07:46.329 } 00:07:46.329 EOF 00:07:46.329 )") 00:07:46.329 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:46.329 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:46.329 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:46.329 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:46.329 "params": { 00:07:46.329 "name": "Nvme0", 00:07:46.329 "trtype": "tcp", 00:07:46.329 "traddr": "10.0.0.2", 00:07:46.329 "adrfam": "ipv4", 00:07:46.329 "trsvcid": "4420", 00:07:46.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:46.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:46.329 "hdgst": false, 00:07:46.329 "ddgst": false 00:07:46.329 }, 00:07:46.329 "method": "bdev_nvme_attach_controller" 00:07:46.329 }' 00:07:46.329 [2024-11-19 11:01:54.581554] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:07:46.329 [2024-11-19 11:01:54.581619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925907 ] 00:07:46.329 [2024-11-19 11:01:54.660608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.589 [2024-11-19 11:01:54.697606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.589 Running I/O for 10 seconds... 
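The `waitforio` check that the trace below performs (host_management.sh@52-64) is a bounded polling loop over `bdev_get_iostat`. A sketch under the assumption that `read_ops` stands in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`; here it is stubbed with the 849 reads this log observed on its first poll:

```shell
# Sketch of host_management.sh's waitforio loop: poll the read-op count up
# to 10 times and succeed once the bdev has served at least 100 reads.
# read_ops is a stub standing in for the bdev_get_iostat RPC + jq filter.
read_ops() { echo 849; }   # value this log saw on the first poll

ret=1
i=10
while [ "$i" -ne 0 ]; do
    read_io_count=$(read_ops)
    if [ "$read_io_count" -ge 100 ]; then
        ret=0              # enough traffic flowed; bdevperf I/O is live
        break
    fi
    i=$((i - 1))
done
echo "ret=$ret read_io_count=$read_io_count"
```

With the stub, the loop breaks on the first iteration and prints `ret=0 read_io_count=849`, matching the `read_io_count=849`, `ret=0`, `break` steps in the trace below.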
00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=849 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 849 -ge 100 ']' 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.162 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.162 [2024-11-19 11:01:55.437708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7530 is same with the state(6) to be set 00:07:47.162 [2024-11-19 11:01:55.437776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7530 is same with the state(6) to be set 00:07:47.162 [2024-11-19 11:01:55.437785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7530 is 
same with the state(6) to be set 00:07:47.162 [2024-11-19 11:01:55.438952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:47.162 [2024-11-19 11:01:55.438991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.439001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:47.163 [2024-11-19 11:01:55.439010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19
11:01:55.439018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:47.163 [2024-11-19 11:01:55.439025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.439034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:47.163 [2024-11-19 11:01:55.439041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.439049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc8b00 is same with the state(6) to be set 00:07:47.163 [2024-11-19 11:01:55.439772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.439786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.439800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.439807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.439817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.439825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.439841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.439849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.163 [2024-11-19 11:01:55.440371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.163 [2024-11-19 11:01:55.440379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 
[2024-11-19 11:01:55.440421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 
11:01:55.440796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.440856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.164 [2024-11-19 11:01:55.440868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.442130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:47.164 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.164 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:47.164 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.164 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.164 task offset: 116736 on job bdev=Nvme0n1 fails 00:07:47.164 00:07:47.164 Latency(us) 00:07:47.164 [2024-11-19T10:01:55.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.164 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:47.164 Job: Nvme0n1 ended in about 0.56 seconds with error 00:07:47.164 Verification LBA range: start 0x0 length 0x400 00:07:47.164 Nvme0n1 : 0.56 1629.54 101.85 114.35 0.00 35769.77 1501.87 31457.28 00:07:47.164 [2024-11-19T10:01:55.516Z] =================================================================================================================== 00:07:47.164 [2024-11-19T10:01:55.516Z] Total : 1629.54 101.85 114.35 0.00 35769.77 1501.87 31457.28 00:07:47.164 [2024-11-19 11:01:55.444122] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.164 [2024-11-19 11:01:55.444144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc8b00 (9): Bad file descriptor 00:07:47.164 [2024-11-19 11:01:55.449538] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:47.164 [2024-11-19 11:01:55.449610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:47.164 [2024-11-19 11:01:55.449632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.164 [2024-11-19 11:01:55.449644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode0 00:07:47.164 [2024-11-19 11:01:55.449653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:47.164 [2024-11-19 11:01:55.449660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:47.164 [2024-11-19 11:01:55.449667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bc8b00 00:07:47.165 [2024-11-19 11:01:55.449686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc8b00 (9): Bad file descriptor 00:07:47.165 [2024-11-19 11:01:55.449699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:47.165 [2024-11-19 11:01:55.449706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:47.165 [2024-11-19 11:01:55.449715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:47.165 [2024-11-19 11:01:55.449724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:07:47.165 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.165 11:01:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:48.544 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3925907 00:07:48.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3925907) - No such process 00:07:48.544 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:48.544 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.545 { 00:07:48.545 "params": { 00:07:48.545 "name": "Nvme$subsystem", 00:07:48.545 "trtype": "$TEST_TRANSPORT", 00:07:48.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.545 "adrfam": "ipv4", 00:07:48.545 "trsvcid": "$NVMF_PORT", 00:07:48.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.545 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:48.545 "hdgst": ${hdgst:-false}, 00:07:48.545 "ddgst": ${ddgst:-false} 00:07:48.545 }, 00:07:48.545 "method": "bdev_nvme_attach_controller" 00:07:48.545 } 00:07:48.545 EOF 00:07:48.545 )") 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:48.545 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.545 "params": { 00:07:48.545 "name": "Nvme0", 00:07:48.545 "trtype": "tcp", 00:07:48.545 "traddr": "10.0.0.2", 00:07:48.545 "adrfam": "ipv4", 00:07:48.545 "trsvcid": "4420", 00:07:48.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:48.545 "hdgst": false, 00:07:48.545 "ddgst": false 00:07:48.545 }, 00:07:48.545 "method": "bdev_nvme_attach_controller" 00:07:48.545 }' 00:07:48.545 [2024-11-19 11:01:56.523017] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:07:48.545 [2024-11-19 11:01:56.523087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926367 ] 00:07:48.545 [2024-11-19 11:01:56.602236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.545 [2024-11-19 11:01:56.637856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.805 Running I/O for 1 seconds... 
00:07:49.746 1598.00 IOPS, 99.88 MiB/s 00:07:49.746 Latency(us) 00:07:49.746 [2024-11-19T10:01:58.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:49.746 Verification LBA range: start 0x0 length 0x400 00:07:49.746 Nvme0n1 : 1.04 1605.12 100.32 0.00 0.00 39189.27 5870.93 32331.09 00:07:49.746 [2024-11-19T10:01:58.098Z] =================================================================================================================== 00:07:49.746 [2024-11-19T10:01:58.098Z] Total : 1605.12 100.32 0.00 0.00 39189.27 5870.93 32331.09 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:50.007 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.007 11:01:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.007 rmmod nvme_tcp 00:07:50.007 rmmod nvme_fabrics 00:07:50.008 rmmod nvme_keyring 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3925836 ']' 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3925836 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3925836 ']' 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3925836 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3925836 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3925836' 00:07:50.008 killing process with pid 3925836 00:07:50.008 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3925836 00:07:50.008 11:01:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3925836 00:07:50.008 [2024-11-19 11:01:58.343589] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.266 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.267 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.267 11:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.175 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.176 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:52.176 00:07:52.176 real 0m15.580s 00:07:52.176 user 0m23.656s 
00:07:52.176 sys 0m7.346s 00:07:52.176 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.176 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.176 ************************************ 00:07:52.176 END TEST nvmf_host_management 00:07:52.176 ************************************ 00:07:52.176 11:02:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:52.176 11:02:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.176 11:02:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.176 11:02:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.435 ************************************ 00:07:52.435 START TEST nvmf_lvol 00:07:52.435 ************************************ 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:52.435 * Looking for test storage... 
00:07:52.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.435 11:02:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.435 --rc genhtml_branch_coverage=1 00:07:52.435 --rc genhtml_function_coverage=1 00:07:52.435 --rc genhtml_legend=1 00:07:52.435 --rc geninfo_all_blocks=1 00:07:52.435 --rc geninfo_unexecuted_blocks=1 
00:07:52.435 00:07:52.435 ' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.435 --rc genhtml_branch_coverage=1 00:07:52.435 --rc genhtml_function_coverage=1 00:07:52.435 --rc genhtml_legend=1 00:07:52.435 --rc geninfo_all_blocks=1 00:07:52.435 --rc geninfo_unexecuted_blocks=1 00:07:52.435 00:07:52.435 ' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.435 --rc genhtml_branch_coverage=1 00:07:52.435 --rc genhtml_function_coverage=1 00:07:52.435 --rc genhtml_legend=1 00:07:52.435 --rc geninfo_all_blocks=1 00:07:52.435 --rc geninfo_unexecuted_blocks=1 00:07:52.435 00:07:52.435 ' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.435 --rc genhtml_branch_coverage=1 00:07:52.435 --rc genhtml_function_coverage=1 00:07:52.435 --rc genhtml_legend=1 00:07:52.435 --rc geninfo_all_blocks=1 00:07:52.435 --rc geninfo_unexecuted_blocks=1 00:07:52.435 00:07:52.435 ' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.435 11:02:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:52.435 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:00.567 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:00.567 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.567 
11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.567 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:00.568 Found net devices under 0000:31:00.0: cvl_0_0 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.568 11:02:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:00.568 Found net devices under 0000:31:00.1: cvl_0_1 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.568 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:00.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:08:00.830 00:08:00.830 --- 10.0.0.2 ping statistics --- 00:08:00.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.830 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:08:00.830 00:08:00.830 --- 10.0.0.1 ping statistics --- 00:08:00.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.830 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.830 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.091 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.091 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.091 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.091 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:01.091 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.091 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.091 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3931610 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3931610 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3931610 ']' 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.092 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.092 [2024-11-19 11:02:09.288380] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:08:01.092 [2024-11-19 11:02:09.288430] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.092 [2024-11-19 11:02:09.373976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.092 [2024-11-19 11:02:09.409521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.092 [2024-11-19 11:02:09.409555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.092 [2024-11-19 11:02:09.409563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.092 [2024-11-19 11:02:09.409570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.092 [2024-11-19 11:02:09.409575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:01.092 [2024-11-19 11:02:09.411156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.092 [2024-11-19 11:02:09.411274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.092 [2024-11-19 11:02:09.411277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.034 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.034 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:02.034 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.034 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.034 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.034 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.034 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:02.034 [2024-11-19 11:02:10.285158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.034 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:02.294 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:02.294 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:02.554 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:02.554 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:02.554 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:02.813 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2562f12f-d3db-4cb0-beed-f707ea5a6009 00:08:02.813 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2562f12f-d3db-4cb0-beed-f707ea5a6009 lvol 20 00:08:03.073 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b065a4e4-fc63-4dca-80a7-3d69fd20a44b 00:08:03.073 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:03.332 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b065a4e4-fc63-4dca-80a7-3d69fd20a44b 00:08:03.332 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:03.591 [2024-11-19 11:02:11.792229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.592 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.852 11:02:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3932165 00:08:03.852 11:02:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:03.852 11:02:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:04.794 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b065a4e4-fc63-4dca-80a7-3d69fd20a44b MY_SNAPSHOT 00:08:05.055 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d36ccb7f-37a0-4594-9b89-fe31ef89e202 00:08:05.055 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b065a4e4-fc63-4dca-80a7-3d69fd20a44b 30 00:08:05.315 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d36ccb7f-37a0-4594-9b89-fe31ef89e202 MY_CLONE 00:08:05.576 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5c39ec02-f7cb-4a29-bd98-3179fe7768e6 00:08:05.576 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5c39ec02-f7cb-4a29-bd98-3179fe7768e6 00:08:05.836 11:02:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3932165 00:08:15.846 Initializing NVMe Controllers 00:08:15.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:15.846 Controller IO queue size 128, less than required. 00:08:15.846 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:15.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:15.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:15.846 Initialization complete. Launching workers. 00:08:15.846 ======================================================== 00:08:15.846 Latency(us) 00:08:15.846 Device Information : IOPS MiB/s Average min max 00:08:15.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12215.60 47.72 10480.76 1515.56 58042.74 00:08:15.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17781.50 69.46 7197.56 1208.28 53510.21 00:08:15.846 ======================================================== 00:08:15.846 Total : 29997.10 117.18 8534.56 1208.28 58042.74 00:08:15.846 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b065a4e4-fc63-4dca-80a7-3d69fd20a44b 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2562f12f-d3db-4cb0-beed-f707ea5a6009 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.846 rmmod nvme_tcp 00:08:15.846 rmmod nvme_fabrics 00:08:15.846 rmmod nvme_keyring 00:08:15.846 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3931610 ']' 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3931610 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3931610 ']' 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3931610 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3931610 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3931610' 00:08:15.846 killing process with pid 3931610 00:08:15.846 11:02:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3931610 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3931610 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.846 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.231 00:08:17.231 real 0m24.770s 00:08:17.231 user 1m4.509s 00:08:17.231 sys 0m9.395s 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:17.231 ************************************ 00:08:17.231 END TEST 
nvmf_lvol 00:08:17.231 ************************************ 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.231 ************************************ 00:08:17.231 START TEST nvmf_lvs_grow 00:08:17.231 ************************************ 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:17.231 * Looking for test storage... 00:08:17.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.231 11:02:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.231 --rc genhtml_branch_coverage=1 00:08:17.231 --rc genhtml_function_coverage=1 00:08:17.231 --rc genhtml_legend=1 00:08:17.231 --rc geninfo_all_blocks=1 00:08:17.231 --rc geninfo_unexecuted_blocks=1 00:08:17.231 00:08:17.231 ' 
00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.231 --rc genhtml_branch_coverage=1 00:08:17.231 --rc genhtml_function_coverage=1 00:08:17.231 --rc genhtml_legend=1 00:08:17.231 --rc geninfo_all_blocks=1 00:08:17.231 --rc geninfo_unexecuted_blocks=1 00:08:17.231 00:08:17.231 ' 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.231 --rc genhtml_branch_coverage=1 00:08:17.231 --rc genhtml_function_coverage=1 00:08:17.231 --rc genhtml_legend=1 00:08:17.231 --rc geninfo_all_blocks=1 00:08:17.231 --rc geninfo_unexecuted_blocks=1 00:08:17.231 00:08:17.231 ' 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.231 --rc genhtml_branch_coverage=1 00:08:17.231 --rc genhtml_function_coverage=1 00:08:17.231 --rc genhtml_legend=1 00:08:17.231 --rc geninfo_all_blocks=1 00:08:17.231 --rc geninfo_unexecuted_blocks=1 00:08:17.231 00:08:17.231 ' 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.231 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.494 11:02:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.494 
11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.494 11:02:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.494 
11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.494 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:25.638 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:25.638 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.638 
11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:25.638 Found net devices under 0000:31:00.0: cvl_0_0 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:25.638 Found net devices under 0000:31:00.1: cvl_0_1 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.638 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.639 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:25.639 11:02:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:25.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:08:25.639 00:08:25.639 --- 10.0.0.2 ping statistics --- 00:08:25.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.639 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:08:25.639 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:08:25.901 00:08:25.901 --- 10.0.0.1 ping statistics --- 00:08:25.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.901 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:25.901 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3939093 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3939093 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3939093 ']' 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.901 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.901 [2024-11-19 11:02:34.106155] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:08:25.901 [2024-11-19 11:02:34.106223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.901 [2024-11-19 11:02:34.198492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.901 [2024-11-19 11:02:34.238986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.901 [2024-11-19 11:02:34.239024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.901 [2024-11-19 11:02:34.239032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.901 [2024-11-19 11:02:34.239039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.901 [2024-11-19 11:02:34.239045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:25.901 [2024-11-19 11:02:34.239628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.843 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.843 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:26.843 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.843 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.843 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.843 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.843 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:26.843 [2024-11-19 11:02:35.089119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.843 ************************************ 00:08:26.843 START TEST lvs_grow_clean 00:08:26.843 ************************************ 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.843 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.104 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:27.104 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:27.365 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a112af28-af03-4083-87a8-8194348d3812 00:08:27.365 11:02:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:27.365 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:27.626 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:27.626 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:27.626 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a112af28-af03-4083-87a8-8194348d3812 lvol 150 00:08:27.626 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4852d965-d54e-483d-8056-5cfa30029914 00:08:27.626 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.626 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.888 [2024-11-19 11:02:36.049115] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:27.888 [2024-11-19 11:02:36.049166] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:27.888 true 00:08:27.888 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:27.888 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:27.888 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:27.888 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.148 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4852d965-d54e-483d-8056-5cfa30029914 00:08:28.409 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.409 [2024-11-19 11:02:36.691086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.409 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3939752 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3939752 /var/tmp/bdevperf.sock 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3939752 ']' 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.670 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:28.670 [2024-11-19 11:02:36.925016] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:08:28.670 [2024-11-19 11:02:36.925069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939752 ] 00:08:28.670 [2024-11-19 11:02:37.020493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.931 [2024-11-19 11:02:37.056274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.504 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.504 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:29.504 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.764 Nvme0n1 00:08:29.764 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:30.025 [ 00:08:30.025 { 00:08:30.025 "name": "Nvme0n1", 00:08:30.025 "aliases": [ 00:08:30.025 "4852d965-d54e-483d-8056-5cfa30029914" 00:08:30.025 ], 00:08:30.025 "product_name": "NVMe disk", 00:08:30.025 "block_size": 4096, 00:08:30.025 "num_blocks": 38912, 00:08:30.025 "uuid": "4852d965-d54e-483d-8056-5cfa30029914", 00:08:30.025 "numa_id": 0, 00:08:30.025 "assigned_rate_limits": { 00:08:30.025 "rw_ios_per_sec": 0, 00:08:30.025 "rw_mbytes_per_sec": 0, 00:08:30.025 "r_mbytes_per_sec": 0, 00:08:30.025 "w_mbytes_per_sec": 0 00:08:30.025 }, 00:08:30.025 "claimed": false, 00:08:30.025 "zoned": false, 00:08:30.025 "supported_io_types": { 00:08:30.025 "read": true, 
00:08:30.025 "write": true, 00:08:30.025 "unmap": true, 00:08:30.025 "flush": true, 00:08:30.025 "reset": true, 00:08:30.025 "nvme_admin": true, 00:08:30.025 "nvme_io": true, 00:08:30.025 "nvme_io_md": false, 00:08:30.025 "write_zeroes": true, 00:08:30.025 "zcopy": false, 00:08:30.025 "get_zone_info": false, 00:08:30.025 "zone_management": false, 00:08:30.025 "zone_append": false, 00:08:30.025 "compare": true, 00:08:30.025 "compare_and_write": true, 00:08:30.025 "abort": true, 00:08:30.025 "seek_hole": false, 00:08:30.025 "seek_data": false, 00:08:30.025 "copy": true, 00:08:30.025 "nvme_iov_md": false 00:08:30.025 }, 00:08:30.025 "memory_domains": [ 00:08:30.025 { 00:08:30.025 "dma_device_id": "system", 00:08:30.025 "dma_device_type": 1 00:08:30.025 } 00:08:30.025 ], 00:08:30.025 "driver_specific": { 00:08:30.025 "nvme": [ 00:08:30.025 { 00:08:30.025 "trid": { 00:08:30.025 "trtype": "TCP", 00:08:30.025 "adrfam": "IPv4", 00:08:30.025 "traddr": "10.0.0.2", 00:08:30.025 "trsvcid": "4420", 00:08:30.025 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:30.025 }, 00:08:30.025 "ctrlr_data": { 00:08:30.025 "cntlid": 1, 00:08:30.025 "vendor_id": "0x8086", 00:08:30.025 "model_number": "SPDK bdev Controller", 00:08:30.025 "serial_number": "SPDK0", 00:08:30.025 "firmware_revision": "25.01", 00:08:30.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.025 "oacs": { 00:08:30.025 "security": 0, 00:08:30.025 "format": 0, 00:08:30.025 "firmware": 0, 00:08:30.025 "ns_manage": 0 00:08:30.025 }, 00:08:30.025 "multi_ctrlr": true, 00:08:30.025 "ana_reporting": false 00:08:30.025 }, 00:08:30.025 "vs": { 00:08:30.025 "nvme_version": "1.3" 00:08:30.025 }, 00:08:30.025 "ns_data": { 00:08:30.025 "id": 1, 00:08:30.025 "can_share": true 00:08:30.025 } 00:08:30.025 } 00:08:30.025 ], 00:08:30.025 "mp_policy": "active_passive" 00:08:30.025 } 00:08:30.025 } 00:08:30.025 ] 00:08:30.025 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3940089 00:08:30.025 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:30.025 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:30.025 Running I/O for 10 seconds... 00:08:31.065 Latency(us) 00:08:31.065 [2024-11-19T10:02:39.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.065 Nvme0n1 : 1.00 17654.00 68.96 0.00 0.00 0.00 0.00 0.00 00:08:31.065 [2024-11-19T10:02:39.417Z] =================================================================================================================== 00:08:31.065 [2024-11-19T10:02:39.417Z] Total : 17654.00 68.96 0.00 0.00 0.00 0.00 0.00 00:08:31.065 00:08:32.028 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a112af28-af03-4083-87a8-8194348d3812 00:08:32.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.028 Nvme0n1 : 2.00 17809.00 69.57 0.00 0.00 0.00 0.00 0.00 00:08:32.028 [2024-11-19T10:02:40.380Z] =================================================================================================================== 00:08:32.028 [2024-11-19T10:02:40.380Z] Total : 17809.00 69.57 0.00 0.00 0.00 0.00 0.00 00:08:32.028 00:08:32.289 true 00:08:32.289 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:32.289 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:32.289 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:32.289 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:32.289 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3940089 00:08:33.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.232 Nvme0n1 : 3.00 17880.00 69.84 0.00 0.00 0.00 0.00 0.00 00:08:33.232 [2024-11-19T10:02:41.584Z] =================================================================================================================== 00:08:33.232 [2024-11-19T10:02:41.584Z] Total : 17880.00 69.84 0.00 0.00 0.00 0.00 0.00 00:08:33.232 00:08:34.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.175 Nvme0n1 : 4.00 17947.00 70.11 0.00 0.00 0.00 0.00 0.00 00:08:34.175 [2024-11-19T10:02:42.527Z] =================================================================================================================== 00:08:34.175 [2024-11-19T10:02:42.528Z] Total : 17947.00 70.11 0.00 0.00 0.00 0.00 0.00 00:08:34.176 00:08:35.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.119 Nvme0n1 : 5.00 17975.40 70.22 0.00 0.00 0.00 0.00 0.00 00:08:35.119 [2024-11-19T10:02:43.471Z] =================================================================================================================== 00:08:35.119 [2024-11-19T10:02:43.471Z] Total : 17975.40 70.22 0.00 0.00 0.00 0.00 0.00 00:08:35.119 00:08:36.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.063 Nvme0n1 : 6.00 18005.33 70.33 0.00 0.00 0.00 0.00 0.00 00:08:36.063 [2024-11-19T10:02:44.415Z] =================================================================================================================== 00:08:36.063 
[2024-11-19T10:02:44.415Z] Total : 18005.33 70.33 0.00 0.00 0.00 0.00 0.00 00:08:36.063 00:08:37.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.004 Nvme0n1 : 7.00 18037.71 70.46 0.00 0.00 0.00 0.00 0.00 00:08:37.004 [2024-11-19T10:02:45.356Z] =================================================================================================================== 00:08:37.004 [2024-11-19T10:02:45.356Z] Total : 18037.71 70.46 0.00 0.00 0.00 0.00 0.00 00:08:37.004 00:08:38.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.387 Nvme0n1 : 8.00 18053.12 70.52 0.00 0.00 0.00 0.00 0.00 00:08:38.387 [2024-11-19T10:02:46.739Z] =================================================================================================================== 00:08:38.387 [2024-11-19T10:02:46.739Z] Total : 18053.12 70.52 0.00 0.00 0.00 0.00 0.00 00:08:38.387 00:08:39.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.329 Nvme0n1 : 9.00 18070.89 70.59 0.00 0.00 0.00 0.00 0.00 00:08:39.329 [2024-11-19T10:02:47.681Z] =================================================================================================================== 00:08:39.329 [2024-11-19T10:02:47.681Z] Total : 18070.89 70.59 0.00 0.00 0.00 0.00 0.00 00:08:39.329 00:08:40.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.271 Nvme0n1 : 10.00 18087.70 70.66 0.00 0.00 0.00 0.00 0.00 00:08:40.271 [2024-11-19T10:02:48.623Z] =================================================================================================================== 00:08:40.271 [2024-11-19T10:02:48.623Z] Total : 18087.70 70.66 0.00 0.00 0.00 0.00 0.00 00:08:40.271 00:08:40.271 00:08:40.271 Latency(us) 00:08:40.271 [2024-11-19T10:02:48.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:40.271 Nvme0n1 : 10.00 18082.98 70.64 0.00 0.00 7075.58 2976.43 16602.45 00:08:40.271 [2024-11-19T10:02:48.623Z] =================================================================================================================== 00:08:40.271 [2024-11-19T10:02:48.623Z] Total : 18082.98 70.64 0.00 0.00 7075.58 2976.43 16602.45 00:08:40.271 { 00:08:40.271 "results": [ 00:08:40.271 { 00:08:40.271 "job": "Nvme0n1", 00:08:40.271 "core_mask": "0x2", 00:08:40.271 "workload": "randwrite", 00:08:40.271 "status": "finished", 00:08:40.271 "queue_depth": 128, 00:08:40.271 "io_size": 4096, 00:08:40.271 "runtime": 10.002665, 00:08:40.271 "iops": 18082.98088559399, 00:08:40.271 "mibps": 70.63664408435152, 00:08:40.271 "io_failed": 0, 00:08:40.271 "io_timeout": 0, 00:08:40.271 "avg_latency_us": 7075.5806004046935, 00:08:40.271 "min_latency_us": 2976.4266666666667, 00:08:40.271 "max_latency_us": 16602.453333333335 00:08:40.271 } 00:08:40.271 ], 00:08:40.271 "core_count": 1 00:08:40.271 } 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3939752 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3939752 ']' 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3939752 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3939752 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:40.272 11:02:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3939752' 00:08:40.272 killing process with pid 3939752 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3939752 00:08:40.272 Received shutdown signal, test time was about 10.000000 seconds 00:08:40.272 00:08:40.272 Latency(us) 00:08:40.272 [2024-11-19T10:02:48.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.272 [2024-11-19T10:02:48.624Z] =================================================================================================================== 00:08:40.272 [2024-11-19T10:02:48.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3939752 00:08:40.272 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.532 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.793 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.793 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:40.793 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:40.793 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:40.793 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.065 [2024-11-19 11:02:49.223502] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.065 
11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:41.065 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:41.327 request: 00:08:41.327 { 00:08:41.327 "uuid": "a112af28-af03-4083-87a8-8194348d3812", 00:08:41.327 "method": "bdev_lvol_get_lvstores", 00:08:41.327 "req_id": 1 00:08:41.327 } 00:08:41.327 Got JSON-RPC error response 00:08:41.327 response: 00:08:41.327 { 00:08:41.327 "code": -19, 00:08:41.327 "message": "No such device" 00:08:41.327 } 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.327 aio_bdev 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4852d965-d54e-483d-8056-5cfa30029914 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4852d965-d54e-483d-8056-5cfa30029914 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.327 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.589 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4852d965-d54e-483d-8056-5cfa30029914 -t 2000 00:08:41.849 [ 00:08:41.849 { 00:08:41.849 "name": "4852d965-d54e-483d-8056-5cfa30029914", 00:08:41.849 "aliases": [ 00:08:41.849 "lvs/lvol" 00:08:41.849 ], 00:08:41.849 "product_name": "Logical Volume", 00:08:41.849 "block_size": 4096, 00:08:41.849 "num_blocks": 38912, 00:08:41.849 "uuid": "4852d965-d54e-483d-8056-5cfa30029914", 00:08:41.849 "assigned_rate_limits": { 00:08:41.849 "rw_ios_per_sec": 0, 00:08:41.849 "rw_mbytes_per_sec": 0, 00:08:41.849 "r_mbytes_per_sec": 0, 00:08:41.849 "w_mbytes_per_sec": 0 00:08:41.849 }, 00:08:41.849 "claimed": false, 00:08:41.849 "zoned": false, 00:08:41.849 "supported_io_types": { 00:08:41.849 "read": true, 00:08:41.849 "write": true, 00:08:41.849 "unmap": true, 00:08:41.849 "flush": false, 00:08:41.849 "reset": true, 00:08:41.849 
"nvme_admin": false, 00:08:41.849 "nvme_io": false, 00:08:41.849 "nvme_io_md": false, 00:08:41.849 "write_zeroes": true, 00:08:41.849 "zcopy": false, 00:08:41.849 "get_zone_info": false, 00:08:41.849 "zone_management": false, 00:08:41.849 "zone_append": false, 00:08:41.849 "compare": false, 00:08:41.849 "compare_and_write": false, 00:08:41.849 "abort": false, 00:08:41.849 "seek_hole": true, 00:08:41.849 "seek_data": true, 00:08:41.850 "copy": false, 00:08:41.850 "nvme_iov_md": false 00:08:41.850 }, 00:08:41.850 "driver_specific": { 00:08:41.850 "lvol": { 00:08:41.850 "lvol_store_uuid": "a112af28-af03-4083-87a8-8194348d3812", 00:08:41.850 "base_bdev": "aio_bdev", 00:08:41.850 "thin_provision": false, 00:08:41.850 "num_allocated_clusters": 38, 00:08:41.850 "snapshot": false, 00:08:41.850 "clone": false, 00:08:41.850 "esnap_clone": false 00:08:41.850 } 00:08:41.850 } 00:08:41.850 } 00:08:41.850 ] 00:08:41.850 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:41.850 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:41.850 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.850 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.850 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a112af28-af03-4083-87a8-8194348d3812 00:08:41.850 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:42.112 11:02:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:42.112 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4852d965-d54e-483d-8056-5cfa30029914 00:08:42.112 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a112af28-af03-4083-87a8-8194348d3812 00:08:42.373 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.634 00:08:42.634 real 0m15.639s 00:08:42.634 user 0m15.409s 00:08:42.634 sys 0m1.334s 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:42.634 ************************************ 00:08:42.634 END TEST lvs_grow_clean 00:08:42.634 ************************************ 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.634 ************************************ 
00:08:42.634 START TEST lvs_grow_dirty 00:08:42.634 ************************************ 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.634 11:02:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.896 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:42.896 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:43.157 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:43.157 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:43.157 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:43.157 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:43.157 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:43.157 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e lvol 150 00:08:43.418 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ea738d94-eb35-49ee-a157-b51272e40216 00:08:43.418 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.418 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:43.679 [2024-11-19 11:02:51.772645] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:43.679 [2024-11-19 11:02:51.772695] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:43.679 true 00:08:43.679 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:43.679 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:43.679 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:43.679 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:43.939 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea738d94-eb35-49ee-a157-b51272e40216 00:08:43.939 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:44.201 [2024-11-19 11:02:52.422678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.201 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3942863 00:08:44.463 11:02:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3942863 /var/tmp/bdevperf.sock 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3942863 ']' 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:44.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.463 11:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.463 [2024-11-19 11:02:52.658819] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:08:44.463 [2024-11-19 11:02:52.658880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3942863 ] 00:08:44.463 [2024-11-19 11:02:52.748069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.463 [2024-11-19 11:02:52.778025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.409 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.409 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:45.409 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:45.409 Nvme0n1 00:08:45.409 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:45.670 [ 00:08:45.671 { 00:08:45.671 "name": "Nvme0n1", 00:08:45.671 "aliases": [ 00:08:45.671 "ea738d94-eb35-49ee-a157-b51272e40216" 00:08:45.671 ], 00:08:45.671 "product_name": "NVMe disk", 00:08:45.671 "block_size": 4096, 00:08:45.671 "num_blocks": 38912, 00:08:45.671 "uuid": "ea738d94-eb35-49ee-a157-b51272e40216", 00:08:45.671 "numa_id": 0, 00:08:45.671 "assigned_rate_limits": { 00:08:45.671 "rw_ios_per_sec": 0, 00:08:45.671 "rw_mbytes_per_sec": 0, 00:08:45.671 "r_mbytes_per_sec": 0, 00:08:45.671 "w_mbytes_per_sec": 0 00:08:45.671 }, 00:08:45.671 "claimed": false, 00:08:45.671 "zoned": false, 00:08:45.671 "supported_io_types": { 00:08:45.671 "read": true, 
00:08:45.671 "write": true, 00:08:45.671 "unmap": true, 00:08:45.671 "flush": true, 00:08:45.671 "reset": true, 00:08:45.671 "nvme_admin": true, 00:08:45.671 "nvme_io": true, 00:08:45.671 "nvme_io_md": false, 00:08:45.671 "write_zeroes": true, 00:08:45.671 "zcopy": false, 00:08:45.671 "get_zone_info": false, 00:08:45.671 "zone_management": false, 00:08:45.671 "zone_append": false, 00:08:45.671 "compare": true, 00:08:45.671 "compare_and_write": true, 00:08:45.671 "abort": true, 00:08:45.671 "seek_hole": false, 00:08:45.671 "seek_data": false, 00:08:45.671 "copy": true, 00:08:45.671 "nvme_iov_md": false 00:08:45.671 }, 00:08:45.671 "memory_domains": [ 00:08:45.671 { 00:08:45.671 "dma_device_id": "system", 00:08:45.671 "dma_device_type": 1 00:08:45.671 } 00:08:45.671 ], 00:08:45.671 "driver_specific": { 00:08:45.671 "nvme": [ 00:08:45.671 { 00:08:45.671 "trid": { 00:08:45.671 "trtype": "TCP", 00:08:45.671 "adrfam": "IPv4", 00:08:45.671 "traddr": "10.0.0.2", 00:08:45.671 "trsvcid": "4420", 00:08:45.671 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:45.671 }, 00:08:45.671 "ctrlr_data": { 00:08:45.671 "cntlid": 1, 00:08:45.671 "vendor_id": "0x8086", 00:08:45.671 "model_number": "SPDK bdev Controller", 00:08:45.671 "serial_number": "SPDK0", 00:08:45.671 "firmware_revision": "25.01", 00:08:45.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.671 "oacs": { 00:08:45.671 "security": 0, 00:08:45.671 "format": 0, 00:08:45.671 "firmware": 0, 00:08:45.671 "ns_manage": 0 00:08:45.671 }, 00:08:45.671 "multi_ctrlr": true, 00:08:45.671 "ana_reporting": false 00:08:45.671 }, 00:08:45.671 "vs": { 00:08:45.671 "nvme_version": "1.3" 00:08:45.671 }, 00:08:45.671 "ns_data": { 00:08:45.671 "id": 1, 00:08:45.671 "can_share": true 00:08:45.671 } 00:08:45.671 } 00:08:45.671 ], 00:08:45.671 "mp_policy": "active_passive" 00:08:45.671 } 00:08:45.671 } 00:08:45.671 ] 00:08:45.671 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3943198 00:08:45.671 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:45.671 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:45.671 Running I/O for 10 seconds... 00:08:47.058 Latency(us) 00:08:47.058 [2024-11-19T10:02:55.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.058 Nvme0n1 : 1.00 17689.00 69.10 0.00 0.00 0.00 0.00 0.00 00:08:47.058 [2024-11-19T10:02:55.410Z] =================================================================================================================== 00:08:47.058 [2024-11-19T10:02:55.410Z] Total : 17689.00 69.10 0.00 0.00 0.00 0.00 0.00 00:08:47.058 00:08:47.631 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:47.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.631 Nvme0n1 : 2.00 17872.50 69.81 0.00 0.00 0.00 0.00 0.00 00:08:47.631 [2024-11-19T10:02:55.983Z] =================================================================================================================== 00:08:47.631 [2024-11-19T10:02:55.983Z] Total : 17872.50 69.81 0.00 0.00 0.00 0.00 0.00 00:08:47.631 00:08:47.892 true 00:08:47.893 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:47.893 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:48.154 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:48.154 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:48.154 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3943198 00:08:48.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.725 Nvme0n1 : 3.00 17942.33 70.09 0.00 0.00 0.00 0.00 0.00 00:08:48.725 [2024-11-19T10:02:57.077Z] =================================================================================================================== 00:08:48.725 [2024-11-19T10:02:57.077Z] Total : 17942.33 70.09 0.00 0.00 0.00 0.00 0.00 00:08:48.725 00:08:49.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.666 Nvme0n1 : 4.00 17987.75 70.26 0.00 0.00 0.00 0.00 0.00 00:08:49.666 [2024-11-19T10:02:58.018Z] =================================================================================================================== 00:08:49.666 [2024-11-19T10:02:58.018Z] Total : 17987.75 70.26 0.00 0.00 0.00 0.00 0.00 00:08:49.666 00:08:51.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.049 Nvme0n1 : 5.00 18029.00 70.43 0.00 0.00 0.00 0.00 0.00 00:08:51.049 [2024-11-19T10:02:59.401Z] =================================================================================================================== 00:08:51.049 [2024-11-19T10:02:59.401Z] Total : 18029.00 70.43 0.00 0.00 0.00 0.00 0.00 00:08:51.049 00:08:51.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.991 Nvme0n1 : 6.00 18053.33 70.52 0.00 0.00 0.00 0.00 0.00 00:08:51.991 [2024-11-19T10:03:00.343Z] =================================================================================================================== 00:08:51.991 
[2024-11-19T10:03:00.343Z] Total : 18053.33 70.52 0.00 0.00 0.00 0.00 0.00 00:08:51.991 00:08:52.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.933 Nvme0n1 : 7.00 18058.00 70.54 0.00 0.00 0.00 0.00 0.00 00:08:52.933 [2024-11-19T10:03:01.285Z] =================================================================================================================== 00:08:52.933 [2024-11-19T10:03:01.285Z] Total : 18058.00 70.54 0.00 0.00 0.00 0.00 0.00 00:08:52.933 00:08:53.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.877 Nvme0n1 : 8.00 18069.25 70.58 0.00 0.00 0.00 0.00 0.00 00:08:53.877 [2024-11-19T10:03:02.229Z] =================================================================================================================== 00:08:53.877 [2024-11-19T10:03:02.229Z] Total : 18069.25 70.58 0.00 0.00 0.00 0.00 0.00 00:08:53.877 00:08:54.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.818 Nvme0n1 : 9.00 18086.56 70.65 0.00 0.00 0.00 0.00 0.00 00:08:54.818 [2024-11-19T10:03:03.170Z] =================================================================================================================== 00:08:54.818 [2024-11-19T10:03:03.170Z] Total : 18086.56 70.65 0.00 0.00 0.00 0.00 0.00 00:08:54.818 00:08:55.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.758 Nvme0n1 : 10.00 18098.40 70.70 0.00 0.00 0.00 0.00 0.00 00:08:55.758 [2024-11-19T10:03:04.110Z] =================================================================================================================== 00:08:55.758 [2024-11-19T10:03:04.110Z] Total : 18098.40 70.70 0.00 0.00 0.00 0.00 0.00 00:08:55.758 00:08:55.758 00:08:55.758 Latency(us) 00:08:55.758 [2024-11-19T10:03:04.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:55.758 Nvme0n1 : 10.01 18101.46 70.71 0.00 0.00 7068.28 4259.84 15291.73 00:08:55.758 [2024-11-19T10:03:04.110Z] =================================================================================================================== 00:08:55.758 [2024-11-19T10:03:04.110Z] Total : 18101.46 70.71 0.00 0.00 7068.28 4259.84 15291.73 00:08:55.758 { 00:08:55.758 "results": [ 00:08:55.758 { 00:08:55.758 "job": "Nvme0n1", 00:08:55.758 "core_mask": "0x2", 00:08:55.758 "workload": "randwrite", 00:08:55.758 "status": "finished", 00:08:55.758 "queue_depth": 128, 00:08:55.758 "io_size": 4096, 00:08:55.758 "runtime": 10.005382, 00:08:55.758 "iops": 18101.457795414506, 00:08:55.758 "mibps": 70.70881951333791, 00:08:55.758 "io_failed": 0, 00:08:55.758 "io_timeout": 0, 00:08:55.758 "avg_latency_us": 7068.280730892118, 00:08:55.758 "min_latency_us": 4259.84, 00:08:55.758 "max_latency_us": 15291.733333333334 00:08:55.758 } 00:08:55.758 ], 00:08:55.758 "core_count": 1 00:08:55.758 } 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3942863 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3942863 ']' 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3942863 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3942863 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3942863' 00:08:55.758 killing process with pid 3942863 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3942863 00:08:55.758 Received shutdown signal, test time was about 10.000000 seconds 00:08:55.758 00:08:55.758 Latency(us) 00:08:55.758 [2024-11-19T10:03:04.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.758 [2024-11-19T10:03:04.110Z] =================================================================================================================== 00:08:55.758 [2024-11-19T10:03:04.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:55.758 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3942863 00:08:56.020 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.280 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.280 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:56.280 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:56.540 11:03:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3939093 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3939093 00:08:56.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3939093 Killed "${NVMF_APP[@]}" "$@" 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3945484 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3945484 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3945484 ']' 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.540 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.540 [2024-11-19 11:03:04.851932] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:08:56.540 [2024-11-19 11:03:04.851992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.799 [2024-11-19 11:03:04.939718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.799 [2024-11-19 11:03:04.975576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.800 [2024-11-19 11:03:04.975611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.800 [2024-11-19 11:03:04.975618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.800 [2024-11-19 11:03:04.975625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.800 [2024-11-19 11:03:04.975631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:56.800 [2024-11-19 11:03:04.976232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.370 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.370 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:57.370 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.370 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.370 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.370 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.370 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.629 [2024-11-19 11:03:05.819104] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:57.629 [2024-11-19 11:03:05.819192] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:57.629 [2024-11-19 11:03:05.819223] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:57.629 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:57.629 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ea738d94-eb35-49ee-a157-b51272e40216 00:08:57.629 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ea738d94-eb35-49ee-a157-b51272e40216 
00:08:57.629 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.629 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:57.629 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.629 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.629 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:57.888 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ea738d94-eb35-49ee-a157-b51272e40216 -t 2000 00:08:57.888 [ 00:08:57.888 { 00:08:57.888 "name": "ea738d94-eb35-49ee-a157-b51272e40216", 00:08:57.888 "aliases": [ 00:08:57.888 "lvs/lvol" 00:08:57.888 ], 00:08:57.888 "product_name": "Logical Volume", 00:08:57.888 "block_size": 4096, 00:08:57.888 "num_blocks": 38912, 00:08:57.888 "uuid": "ea738d94-eb35-49ee-a157-b51272e40216", 00:08:57.888 "assigned_rate_limits": { 00:08:57.888 "rw_ios_per_sec": 0, 00:08:57.888 "rw_mbytes_per_sec": 0, 00:08:57.888 "r_mbytes_per_sec": 0, 00:08:57.888 "w_mbytes_per_sec": 0 00:08:57.888 }, 00:08:57.888 "claimed": false, 00:08:57.888 "zoned": false, 00:08:57.888 "supported_io_types": { 00:08:57.888 "read": true, 00:08:57.888 "write": true, 00:08:57.888 "unmap": true, 00:08:57.888 "flush": false, 00:08:57.888 "reset": true, 00:08:57.888 "nvme_admin": false, 00:08:57.888 "nvme_io": false, 00:08:57.888 "nvme_io_md": false, 00:08:57.888 "write_zeroes": true, 00:08:57.888 "zcopy": false, 00:08:57.888 "get_zone_info": false, 00:08:57.888 "zone_management": false, 00:08:57.888 "zone_append": 
false, 00:08:57.888 "compare": false, 00:08:57.888 "compare_and_write": false, 00:08:57.888 "abort": false, 00:08:57.888 "seek_hole": true, 00:08:57.888 "seek_data": true, 00:08:57.888 "copy": false, 00:08:57.888 "nvme_iov_md": false 00:08:57.888 }, 00:08:57.888 "driver_specific": { 00:08:57.888 "lvol": { 00:08:57.888 "lvol_store_uuid": "6458d5bf-99d4-4111-b8c9-209cef7fde1e", 00:08:57.888 "base_bdev": "aio_bdev", 00:08:57.888 "thin_provision": false, 00:08:57.888 "num_allocated_clusters": 38, 00:08:57.888 "snapshot": false, 00:08:57.888 "clone": false, 00:08:57.888 "esnap_clone": false 00:08:57.888 } 00:08:57.888 } 00:08:57.888 } 00:08:57.888 ] 00:08:57.888 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:57.888 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:57.888 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:58.147 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:58.147 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:58.147 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:58.408 [2024-11-19 11:03:06.691361] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.408 11:03:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:58.408 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:58.667 request: 00:08:58.667 { 00:08:58.667 "uuid": "6458d5bf-99d4-4111-b8c9-209cef7fde1e", 00:08:58.667 "method": "bdev_lvol_get_lvstores", 00:08:58.667 "req_id": 1 00:08:58.667 } 00:08:58.667 Got JSON-RPC error response 00:08:58.667 response: 00:08:58.667 { 00:08:58.667 "code": -19, 00:08:58.667 "message": "No such device" 00:08:58.667 } 00:08:58.667 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:58.667 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.667 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:58.667 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.667 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.926 aio_bdev 00:08:58.926 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ea738d94-eb35-49ee-a157-b51272e40216 00:08:58.927 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ea738d94-eb35-49ee-a157-b51272e40216 00:08:58.927 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.927 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:58.927 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.927 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.927 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:58.927 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ea738d94-eb35-49ee-a157-b51272e40216 -t 2000 00:08:59.186 [ 00:08:59.186 { 00:08:59.186 "name": "ea738d94-eb35-49ee-a157-b51272e40216", 00:08:59.186 "aliases": [ 00:08:59.186 "lvs/lvol" 00:08:59.186 ], 00:08:59.186 "product_name": "Logical Volume", 00:08:59.186 "block_size": 4096, 00:08:59.186 "num_blocks": 38912, 00:08:59.186 "uuid": "ea738d94-eb35-49ee-a157-b51272e40216", 00:08:59.186 "assigned_rate_limits": { 00:08:59.186 "rw_ios_per_sec": 0, 00:08:59.186 "rw_mbytes_per_sec": 0, 00:08:59.186 "r_mbytes_per_sec": 0, 00:08:59.186 "w_mbytes_per_sec": 0 00:08:59.186 }, 00:08:59.186 "claimed": false, 00:08:59.186 "zoned": false, 00:08:59.186 "supported_io_types": { 00:08:59.186 "read": true, 00:08:59.186 "write": true, 00:08:59.186 "unmap": true, 00:08:59.186 "flush": false, 00:08:59.186 "reset": true, 00:08:59.186 "nvme_admin": false, 00:08:59.186 "nvme_io": false, 00:08:59.186 "nvme_io_md": false, 00:08:59.186 "write_zeroes": true, 00:08:59.186 "zcopy": false, 00:08:59.186 "get_zone_info": false, 00:08:59.186 "zone_management": false, 00:08:59.186 "zone_append": false, 00:08:59.186 "compare": false, 00:08:59.186 "compare_and_write": false, 
00:08:59.186 "abort": false, 00:08:59.186 "seek_hole": true, 00:08:59.186 "seek_data": true, 00:08:59.186 "copy": false, 00:08:59.186 "nvme_iov_md": false 00:08:59.186 }, 00:08:59.186 "driver_specific": { 00:08:59.186 "lvol": { 00:08:59.186 "lvol_store_uuid": "6458d5bf-99d4-4111-b8c9-209cef7fde1e", 00:08:59.186 "base_bdev": "aio_bdev", 00:08:59.186 "thin_provision": false, 00:08:59.187 "num_allocated_clusters": 38, 00:08:59.187 "snapshot": false, 00:08:59.187 "clone": false, 00:08:59.187 "esnap_clone": false 00:08:59.187 } 00:08:59.187 } 00:08:59.187 } 00:08:59.187 ] 00:08:59.187 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:59.187 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:59.187 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:59.447 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:59.447 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:59.447 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:59.447 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:59.447 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ea738d94-eb35-49ee-a157-b51272e40216 00:08:59.707 11:03:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6458d5bf-99d4-4111-b8c9-209cef7fde1e 00:08:59.967 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.967 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:00.228 00:09:00.228 real 0m17.452s 00:09:00.228 user 0m45.472s 00:09:00.228 sys 0m2.948s 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:00.228 ************************************ 00:09:00.228 END TEST lvs_grow_dirty 00:09:00.228 ************************************ 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:00.228 nvmf_trace.0 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.228 rmmod nvme_tcp 00:09:00.228 rmmod nvme_fabrics 00:09:00.228 rmmod nvme_keyring 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3945484 ']' 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3945484 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3945484 ']' 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3945484 
00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.228 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3945484 00:09:00.489 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.489 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.489 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3945484' 00:09:00.490 killing process with pid 3945484 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3945484 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3945484 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.490 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:03.034 00:09:03.034 real 0m45.417s 00:09:03.034 user 1m7.525s 00:09:03.034 sys 0m11.110s 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:03.034 ************************************ 00:09:03.034 END TEST nvmf_lvs_grow 00:09:03.034 ************************************ 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.034 ************************************ 00:09:03.034 START TEST nvmf_bdev_io_wait 00:09:03.034 ************************************ 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:03.034 * Looking for test storage... 
00:09:03.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.034 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.034 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.034 --rc genhtml_branch_coverage=1 00:09:03.034 --rc genhtml_function_coverage=1 00:09:03.034 --rc genhtml_legend=1 00:09:03.034 --rc geninfo_all_blocks=1 00:09:03.034 --rc geninfo_unexecuted_blocks=1 00:09:03.034 00:09:03.034 ' 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.034 --rc genhtml_branch_coverage=1 00:09:03.034 --rc genhtml_function_coverage=1 00:09:03.034 --rc genhtml_legend=1 00:09:03.034 --rc geninfo_all_blocks=1 00:09:03.034 --rc geninfo_unexecuted_blocks=1 00:09:03.034 00:09:03.034 ' 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.034 --rc genhtml_branch_coverage=1 00:09:03.034 --rc genhtml_function_coverage=1 00:09:03.034 --rc genhtml_legend=1 00:09:03.034 --rc geninfo_all_blocks=1 00:09:03.034 --rc geninfo_unexecuted_blocks=1 00:09:03.034 00:09:03.034 ' 00:09:03.034 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.035 --rc genhtml_branch_coverage=1 00:09:03.035 --rc genhtml_function_coverage=1 00:09:03.035 --rc genhtml_legend=1 00:09:03.035 --rc geninfo_all_blocks=1 00:09:03.035 --rc geninfo_unexecuted_blocks=1 00:09:03.035 00:09:03.035 ' 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.035 11:03:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:03.035 11:03:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.180 11:03:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:11.180 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:11.180 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.180 11:03:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:11.180 Found net devices under 0000:31:00.0: cvl_0_0 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.180 
11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:11.180 Found net devices under 0000:31:00.1: cvl_0_1 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.180 11:03:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.180 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:11.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:09:11.181 00:09:11.181 --- 10.0.0.2 ping statistics --- 00:09:11.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.181 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:11.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:09:11.181 00:09:11.181 --- 10.0.0.1 ping statistics --- 00:09:11.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.181 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3951532 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3951532 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3951532 ']' 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.181 11:03:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.181 [2024-11-19 11:03:19.516995] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:09:11.181 [2024-11-19 11:03:19.517061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.442 [2024-11-19 11:03:19.609843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.442 [2024-11-19 11:03:19.652685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.442 [2024-11-19 11:03:19.652721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:11.442 [2024-11-19 11:03:19.652729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.442 [2024-11-19 11:03:19.652736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.442 [2024-11-19 11:03:19.652742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.442 [2024-11-19 11:03:19.654535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.442 [2024-11-19 11:03:19.654654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.442 [2024-11-19 11:03:19.654813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.443 [2024-11-19 11:03:19.654813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.014 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.014 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:12.014 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.014 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.014 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.275 11:03:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.275 [2024-11-19 11:03:20.431209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.275 Malloc0 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.275 
11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.275 [2024-11-19 11:03:20.490411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3951752 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3951755 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.275 { 00:09:12.275 "params": { 00:09:12.275 "name": "Nvme$subsystem", 00:09:12.275 "trtype": "$TEST_TRANSPORT", 00:09:12.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.275 "adrfam": "ipv4", 00:09:12.275 "trsvcid": "$NVMF_PORT", 00:09:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.275 "hdgst": ${hdgst:-false}, 00:09:12.275 "ddgst": ${ddgst:-false} 00:09:12.275 }, 00:09:12.275 "method": "bdev_nvme_attach_controller" 00:09:12.275 } 00:09:12.275 EOF 00:09:12.275 )") 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3951758 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.275 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3951762 00:09:12.276 11:03:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.276 { 00:09:12.276 "params": { 00:09:12.276 "name": "Nvme$subsystem", 00:09:12.276 "trtype": "$TEST_TRANSPORT", 00:09:12.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.276 "adrfam": "ipv4", 00:09:12.276 "trsvcid": "$NVMF_PORT", 00:09:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.276 "hdgst": ${hdgst:-false}, 00:09:12.276 "ddgst": ${ddgst:-false} 00:09:12.276 }, 00:09:12.276 "method": "bdev_nvme_attach_controller" 00:09:12.276 } 00:09:12.276 EOF 00:09:12.276 )") 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.276 { 00:09:12.276 "params": { 00:09:12.276 "name": "Nvme$subsystem", 
00:09:12.276 "trtype": "$TEST_TRANSPORT", 00:09:12.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.276 "adrfam": "ipv4", 00:09:12.276 "trsvcid": "$NVMF_PORT", 00:09:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.276 "hdgst": ${hdgst:-false}, 00:09:12.276 "ddgst": ${ddgst:-false} 00:09:12.276 }, 00:09:12.276 "method": "bdev_nvme_attach_controller" 00:09:12.276 } 00:09:12.276 EOF 00:09:12.276 )") 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:12.276 { 00:09:12.276 "params": { 00:09:12.276 "name": "Nvme$subsystem", 00:09:12.276 "trtype": "$TEST_TRANSPORT", 00:09:12.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.276 "adrfam": "ipv4", 00:09:12.276 "trsvcid": "$NVMF_PORT", 00:09:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.276 "hdgst": ${hdgst:-false}, 00:09:12.276 "ddgst": ${ddgst:-false} 00:09:12.276 }, 00:09:12.276 "method": "bdev_nvme_attach_controller" 00:09:12.276 } 00:09:12.276 EOF 00:09:12.276 )") 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3951752 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.276 "params": { 00:09:12.276 "name": "Nvme1", 00:09:12.276 "trtype": "tcp", 00:09:12.276 "traddr": "10.0.0.2", 00:09:12.276 "adrfam": "ipv4", 00:09:12.276 "trsvcid": "4420", 00:09:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.276 "hdgst": false, 00:09:12.276 "ddgst": false 00:09:12.276 }, 00:09:12.276 "method": "bdev_nvme_attach_controller" 00:09:12.276 }' 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.276 "params": { 00:09:12.276 "name": "Nvme1", 00:09:12.276 "trtype": "tcp", 00:09:12.276 "traddr": "10.0.0.2", 00:09:12.276 "adrfam": "ipv4", 00:09:12.276 "trsvcid": "4420", 00:09:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.276 "hdgst": false, 00:09:12.276 "ddgst": false 00:09:12.276 }, 00:09:12.276 "method": "bdev_nvme_attach_controller" 00:09:12.276 }' 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.276 "params": { 00:09:12.276 "name": "Nvme1", 00:09:12.276 "trtype": "tcp", 00:09:12.276 "traddr": "10.0.0.2", 00:09:12.276 "adrfam": "ipv4", 00:09:12.276 "trsvcid": "4420", 00:09:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.276 "hdgst": false, 00:09:12.276 "ddgst": false 00:09:12.276 }, 00:09:12.276 "method": "bdev_nvme_attach_controller" 00:09:12.276 }' 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:12.276 11:03:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:12.276 "params": { 00:09:12.276 "name": "Nvme1", 00:09:12.276 "trtype": "tcp", 00:09:12.276 "traddr": "10.0.0.2", 00:09:12.276 "adrfam": "ipv4", 00:09:12.276 "trsvcid": "4420", 00:09:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.276 "hdgst": false, 00:09:12.276 "ddgst": false 00:09:12.276 }, 00:09:12.276 "method": "bdev_nvme_attach_controller" 00:09:12.276 }' 00:09:12.276 [2024-11-19 11:03:20.544044] Starting SPDK v25.01-pre git sha1 
029355612 / DPDK 24.03.0 initialization... 00:09:12.276 [2024-11-19 11:03:20.544098] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:12.276 [2024-11-19 11:03:20.548793] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:09:12.276 [2024-11-19 11:03:20.548841] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:12.276 [2024-11-19 11:03:20.548991] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:09:12.276 [2024-11-19 11:03:20.549036] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:12.276 [2024-11-19 11:03:20.549661] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:09:12.276 [2024-11-19 11:03:20.549703] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:12.537 [2024-11-19 11:03:20.711020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.537 [2024-11-19 11:03:20.740978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:12.537 [2024-11-19 11:03:20.754241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.537 [2024-11-19 11:03:20.783106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:12.537 [2024-11-19 11:03:20.804581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.537 [2024-11-19 11:03:20.832877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:12.537 [2024-11-19 11:03:20.849847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.537 [2024-11-19 11:03:20.878755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:12.798 Running I/O for 1 seconds... 00:09:12.798 Running I/O for 1 seconds... 00:09:12.798 Running I/O for 1 seconds... 00:09:13.059 Running I/O for 1 seconds... 
00:09:13.632 20597.00 IOPS, 80.46 MiB/s 00:09:13.632 Latency(us) 00:09:13.632 [2024-11-19T10:03:21.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.632 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:13.632 Nvme1n1 : 1.01 20659.44 80.70 0.00 0.00 6179.47 3044.69 15510.19 00:09:13.632 [2024-11-19T10:03:21.984Z] =================================================================================================================== 00:09:13.632 [2024-11-19T10:03:21.984Z] Total : 20659.44 80.70 0.00 0.00 6179.47 3044.69 15510.19 00:09:13.632 8142.00 IOPS, 31.80 MiB/s 00:09:13.632 Latency(us) 00:09:13.632 [2024-11-19T10:03:21.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.632 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:13.632 Nvme1n1 : 1.02 8158.34 31.87 0.00 0.00 15566.93 6635.52 29054.29 00:09:13.632 [2024-11-19T10:03:21.984Z] =================================================================================================================== 00:09:13.632 [2024-11-19T10:03:21.984Z] Total : 8158.34 31.87 0.00 0.00 15566.93 6635.52 29054.29 00:09:13.892 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3951755 00:09:13.892 187960.00 IOPS, 734.22 MiB/s 00:09:13.892 Latency(us) 00:09:13.892 [2024-11-19T10:03:22.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.892 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:13.892 Nvme1n1 : 1.00 187588.13 732.77 0.00 0.00 678.42 298.67 1966.08 00:09:13.892 [2024-11-19T10:03:22.244Z] =================================================================================================================== 00:09:13.892 [2024-11-19T10:03:22.244Z] Total : 187588.13 732.77 0.00 0.00 678.42 298.67 1966.08 00:09:13.892 8051.00 IOPS, 31.45 MiB/s 00:09:13.892 Latency(us) 00:09:13.892 
[2024-11-19T10:03:22.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.893 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:13.893 Nvme1n1 : 1.01 8129.23 31.75 0.00 0.00 15693.18 4805.97 40632.32 00:09:13.893 [2024-11-19T10:03:22.245Z] =================================================================================================================== 00:09:13.893 [2024-11-19T10:03:22.245Z] Total : 8129.23 31.75 0.00 0.00 15693.18 4805.97 40632.32 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3951758 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3951762 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.154 rmmod nvme_tcp 00:09:14.154 rmmod nvme_fabrics 00:09:14.154 rmmod nvme_keyring 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3951532 ']' 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3951532 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3951532 ']' 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3951532 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.154 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951532 00:09:14.155 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.155 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.155 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951532' 00:09:14.155 killing process with pid 3951532 00:09:14.155 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3951532 00:09:14.155 11:03:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3951532 00:09:14.415 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.415 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.415 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.415 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:14.415 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:14.415 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.416 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.416 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.416 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.416 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.416 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.416 11:03:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.330 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.330 00:09:16.330 real 0m13.730s 00:09:16.330 user 0m18.921s 00:09:16.330 sys 0m7.721s 00:09:16.330 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.330 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.330 ************************************ 
00:09:16.330 END TEST nvmf_bdev_io_wait 00:09:16.330 ************************************ 00:09:16.330 11:03:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.330 11:03:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.330 11:03:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.330 11:03:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.592 ************************************ 00:09:16.592 START TEST nvmf_queue_depth 00:09:16.592 ************************************ 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.592 * Looking for test storage... 00:09:16.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.592 --rc genhtml_branch_coverage=1 00:09:16.592 --rc genhtml_function_coverage=1 00:09:16.592 --rc genhtml_legend=1 00:09:16.592 --rc geninfo_all_blocks=1 00:09:16.592 --rc 
geninfo_unexecuted_blocks=1 00:09:16.592 00:09:16.592 ' 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.592 --rc genhtml_branch_coverage=1 00:09:16.592 --rc genhtml_function_coverage=1 00:09:16.592 --rc genhtml_legend=1 00:09:16.592 --rc geninfo_all_blocks=1 00:09:16.592 --rc geninfo_unexecuted_blocks=1 00:09:16.592 00:09:16.592 ' 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.592 --rc genhtml_branch_coverage=1 00:09:16.592 --rc genhtml_function_coverage=1 00:09:16.592 --rc genhtml_legend=1 00:09:16.592 --rc geninfo_all_blocks=1 00:09:16.592 --rc geninfo_unexecuted_blocks=1 00:09:16.592 00:09:16.592 ' 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.592 --rc genhtml_branch_coverage=1 00:09:16.592 --rc genhtml_function_coverage=1 00:09:16.592 --rc genhtml_legend=1 00:09:16.592 --rc geninfo_all_blocks=1 00:09:16.592 --rc geninfo_unexecuted_blocks=1 00:09:16.592 00:09:16.592 ' 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.592 11:03:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.592 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.593 11:03:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.593 11:03:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.593 11:03:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.876 11:03:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.876 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:24.877 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:24.877 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:24.877 Found net devices under 0000:31:00.0: cvl_0_0 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:24.877 Found net devices under 0000:31:00.1: cvl_0_1 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.877 
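The discovery loop traced above (nvmf/common.sh@410-429) resolves each whitelisted PCI address to its kernel net interface by globbing the sysfs `net/` directory and stripping the path prefix. A minimal standalone sketch of that step follows; `SYSFS_ROOT` and `pci_to_net_devs` are hypothetical names introduced here so the logic can be exercised against a fake sysfs tree, and are not part of the real nvmf/common.sh.

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device lookup: the kernel exposes the bound
# interfaces of a PCI NIC under /sys/bus/pci/devices/<addr>/net/.
set -euo pipefail
shopt -s nullglob                        # empty glob -> empty array, as the test scripts assume

SYSFS_ROOT="${SYSFS_ROOT:-/sys/bus/pci/devices}"   # overridable for testing

pci_to_net_devs() {                      # print interface names for one PCI address
    local pci=$1
    local pci_net_devs=("$SYSFS_ROOT/$pci/net/"*)
    (( ${#pci_net_devs[@]} )) || return 1          # no interface bound
    pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the basename, e.g. cvl_0_0
    printf '%s\n' "${pci_net_devs[@]}"
}

for pci in "$@"; do
    echo "Found net devices under $pci: $(pci_to_net_devs "$pci" | paste -sd' ' -)"
done
```

Run against a real machine this prints lines of the same shape as the `Found net devices under 0000:31:00.0: cvl_0_0` entries in the log.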
11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.877 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:09:24.877 00:09:24.877 --- 10.0.0.2 ping statistics --- 00:09:24.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.877 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:09:24.877 00:09:24.877 --- 10.0.0.1 ping statistics --- 00:09:24.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.877 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.877 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3956954 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3956954 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3956954 ']' 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.141 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.141 [2024-11-19 11:03:33.279769] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:09:25.141 [2024-11-19 11:03:33.279836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.141 [2024-11-19 11:03:33.390531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.141 [2024-11-19 11:03:33.440972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.141 [2024-11-19 11:03:33.441024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:25.141 [2024-11-19 11:03:33.441032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.141 [2024-11-19 11:03:33.441040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.141 [2024-11-19 11:03:33.441046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.141 [2024-11-19 11:03:33.441836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 [2024-11-19 11:03:34.138743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 Malloc0 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.083 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.083 [2024-11-19 11:03:34.183964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.084 11:03:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3957043 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3957043 /var/tmp/bdevperf.sock 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3957043 ']' 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:26.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.084 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.084 [2024-11-19 11:03:34.253176] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:09:26.084 [2024-11-19 11:03:34.253245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957043 ] 00:09:26.084 [2024-11-19 11:03:34.337061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.084 [2024-11-19 11:03:34.378669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.024 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.024 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:27.025 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:27.025 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.025 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.025 NVMe0n1 00:09:27.025 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.025 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:27.025 Running I/O for 10 seconds... 
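bdevperf was started with `-q 1024 -o 4096`, so each I/O is 4 KiB and the MiB/s column of its summary is just IOPS scaled by the I/O size. A quick sanity check of that relation, using an awk one-liner (`iops_to_mibps` is an illustrative helper name, not part of the test scripts):

```shell
# MiB/s = IOPS * io_size_bytes / 2^20; useful for spot-checking the
# bdevperf summary pair (IOPS, MiB/s) printed at the end of a run.
iops_to_mibps() {
    awk -v iops="$1" -v io_size="$2" \
        'BEGIN { printf "%.2f\n", iops * io_size / 1048576 }'
}
iops_to_mibps 11292.12 4096
```

With this run's reported 11292.12 IOPS at 4096-byte I/Os the helper prints 44.11, matching the MiB/s figure in the summary table.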
00:09:29.349 8762.00 IOPS, 34.23 MiB/s [2024-11-19T10:03:38.642Z] 9679.50 IOPS, 37.81 MiB/s [2024-11-19T10:03:39.583Z] 10244.00 IOPS, 40.02 MiB/s [2024-11-19T10:03:40.524Z] 10671.00 IOPS, 41.68 MiB/s [2024-11-19T10:03:41.465Z] 10854.60 IOPS, 42.40 MiB/s [2024-11-19T10:03:42.405Z] 10954.83 IOPS, 42.79 MiB/s [2024-11-19T10:03:43.789Z] 11113.43 IOPS, 43.41 MiB/s [2024-11-19T10:03:44.730Z] 11138.38 IOPS, 43.51 MiB/s [2024-11-19T10:03:45.673Z] 11225.67 IOPS, 43.85 MiB/s [2024-11-19T10:03:45.673Z] 11263.50 IOPS, 44.00 MiB/s 00:09:37.322 Latency(us) 00:09:37.322 [2024-11-19T10:03:45.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.322 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:37.322 Verification LBA range: start 0x0 length 0x4000 00:09:37.322 NVMe0n1 : 10.07 11292.12 44.11 0.00 0.00 90391.52 24248.32 68157.44 00:09:37.322 [2024-11-19T10:03:45.674Z] =================================================================================================================== 00:09:37.322 [2024-11-19T10:03:45.674Z] Total : 11292.12 44.11 0.00 0.00 90391.52 24248.32 68157.44 00:09:37.322 { 00:09:37.322 "results": [ 00:09:37.322 { 00:09:37.322 "job": "NVMe0n1", 00:09:37.322 "core_mask": "0x1", 00:09:37.322 "workload": "verify", 00:09:37.322 "status": "finished", 00:09:37.322 "verify_range": { 00:09:37.322 "start": 0, 00:09:37.322 "length": 16384 00:09:37.322 }, 00:09:37.322 "queue_depth": 1024, 00:09:37.322 "io_size": 4096, 00:09:37.322 "runtime": 10.065339, 00:09:37.322 "iops": 11292.118427407164, 00:09:37.322 "mibps": 44.109837607059234, 00:09:37.322 "io_failed": 0, 00:09:37.322 "io_timeout": 0, 00:09:37.322 "avg_latency_us": 90391.52268639821, 00:09:37.322 "min_latency_us": 24248.32, 00:09:37.322 "max_latency_us": 68157.44 00:09:37.322 } 00:09:37.322 ], 00:09:37.322 "core_count": 1 00:09:37.322 } 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
3957043 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3957043 ']' 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3957043 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3957043 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3957043' 00:09:37.322 killing process with pid 3957043 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3957043 00:09:37.322 Received shutdown signal, test time was about 10.000000 seconds 00:09:37.322 00:09:37.322 Latency(us) 00:09:37.322 [2024-11-19T10:03:45.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.322 [2024-11-19T10:03:45.674Z] =================================================================================================================== 00:09:37.322 [2024-11-19T10:03:45.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3957043 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:37.322 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.583 rmmod nvme_tcp 00:09:37.583 rmmod nvme_fabrics 00:09:37.583 rmmod nvme_keyring 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3956954 ']' 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3956954 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3956954 ']' 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3956954 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3956954 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3956954' 00:09:37.583 killing process with pid 3956954 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3956954 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3956954 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.583 11:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.132 11:03:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.132 00:09:40.132 real 0m23.297s 00:09:40.132 user 0m26.245s 00:09:40.132 sys 0m7.442s 00:09:40.132 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.132 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.132 ************************************ 00:09:40.132 END TEST nvmf_queue_depth 00:09:40.132 ************************************ 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.132 ************************************ 00:09:40.132 START TEST nvmf_target_multipath 00:09:40.132 ************************************ 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:40.132 * Looking for test storage... 
00:09:40.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:40.132 11:03:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.132 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
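The `lt 1.15 2` gate traced above comes from scripts/common.sh's `cmp_versions`: each version string is split on `.`, `-`, and `:` into an array, then compared component by component with missing components treated as 0. A condensed sketch of that logic (`version_lt` is a hypothetical name; it assumes purely numeric components, as lcov versions are):

```shell
# Returns 0 (true) when version $1 sorts strictly before version $2.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"       # split on dot, dash, colon
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad short versions with 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                              # equal, hence not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace takes the `lt 1.15 2` branch: component 0 compares 1 against 2, so the installed lcov 1.15 is treated as older than 2 and the `--rc lcov_*_coverage=1` option spelling is selected.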
00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:40.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.133 --rc genhtml_branch_coverage=1 00:09:40.133 --rc genhtml_function_coverage=1 00:09:40.133 --rc genhtml_legend=1 00:09:40.133 --rc geninfo_all_blocks=1 00:09:40.133 --rc geninfo_unexecuted_blocks=1 00:09:40.133 00:09:40.133 ' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:40.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.133 --rc genhtml_branch_coverage=1 00:09:40.133 --rc genhtml_function_coverage=1 00:09:40.133 --rc genhtml_legend=1 00:09:40.133 --rc geninfo_all_blocks=1 00:09:40.133 --rc geninfo_unexecuted_blocks=1 00:09:40.133 00:09:40.133 ' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:40.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.133 --rc genhtml_branch_coverage=1 00:09:40.133 --rc genhtml_function_coverage=1 00:09:40.133 --rc genhtml_legend=1 00:09:40.133 --rc geninfo_all_blocks=1 00:09:40.133 --rc geninfo_unexecuted_blocks=1 00:09:40.133 00:09:40.133 ' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:40.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.133 --rc genhtml_branch_coverage=1 00:09:40.133 --rc genhtml_function_coverage=1 00:09:40.133 --rc genhtml_legend=1 00:09:40.133 --rc geninfo_all_blocks=1 00:09:40.133 --rc geninfo_unexecuted_blocks=1 00:09:40.133 00:09:40.133 ' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
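The trace above records a real shell error: `'[' '' -eq 1 ']'` fails with "integer expression expected" because the tested variable expands to an empty string. A minimal sketch of the usual defensive fix, defaulting the expansion before the numeric test (the variable name here is hypothetical, not the one common.sh actually uses):

```shell
# Hypothetical variable name; the point is the ${var:-0} default.
# An empty value would make '[' "" -eq 1 ']' error out, as seen in the log.
NVMF_NO_HUGE=""   # empty, mirroring the traced failure

if [ "${NVMF_NO_HUGE:-0}" -eq 1 ]; then
  echo "huge pages disabled"
else
  echo "huge pages enabled"
fi
```

With the `:-0` default the test degrades gracefully instead of printing an error to the log.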
MALLOC_BDEV_SIZE=64 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:40.133 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.273 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.273 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:48.273 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:48.273 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:48.273 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:48.274 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:48.274 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:48.274 Found net devices under 0000:31:00.0: cvl_0_0 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:48.274 11:03:56 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:48.274 Found net devices under 0000:31:00.1: cvl_0_1 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:48.274 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:48.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:09:48.275 00:09:48.275 --- 10.0.0.2 ping statistics --- 00:09:48.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.275 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:48.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:09:48.275 00:09:48.275 --- 10.0.0.1 ping statistics --- 00:09:48.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.275 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:48.275 only one NIC for nvmf test 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:48.275 11:03:56 
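The namespace wiring traced above (move one NIC into a private netns, address both ends, bring links up, then ping across) can be sketched as a dry-run script. Interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.x addresses are taken from the log; commands are echoed rather than executed, since the real sequence needs root and the physical NICs:

```shell
# Dry-run: print each command instead of running it.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, host netns
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
```

This mirrors why the test then pings 10.0.0.2 from the host and 10.0.0.1 from inside the namespace: the two NICs now sit in separate network stacks on the same machine.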
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.275 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.275 rmmod nvme_tcp 00:09:48.536 rmmod nvme_fabrics 00:09:48.536 rmmod nvme_keyring 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.536 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.450 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.711 00:09:50.711 real 0m10.750s 00:09:50.711 user 0m2.359s 00:09:50.711 sys 0m6.310s 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:50.711 ************************************ 00:09:50.711 END TEST nvmf_target_multipath 00:09:50.711 ************************************ 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core 
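The teardown above disables errexit (`set +e`), retries `modprobe -v -r` inside a bounded `for i in {1..20}` loop, then restores `set -e`. A sketch of that retry pattern with a stand-in command that only succeeds on its third attempt (the helper name is hypothetical):

```shell
# Bounded retry: tolerate transient failures without aborting the script.
attempt=0
try_cmd() { attempt=$((attempt + 1)); [ "$attempt" -ge 3 ]; }  # fails twice, then succeeds

set +e                         # allow failures inside the loop
for i in $(seq 1 20); do
  if try_cmd; then
    break                      # stop as soon as the command succeeds
  fi
done
set -e                         # restore strict error handling
echo "succeeded after $attempt attempts"
```

The bound (20 here, as in the trace) keeps a module that never unloads from hanging the teardown forever.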
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.711 ************************************ 00:09:50.711 START TEST nvmf_zcopy 00:09:50.711 ************************************ 00:09:50.711 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:50.711 * Looking for test storage... 00:09:50.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.711 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.711 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.711 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.972 11:03:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.972 --rc genhtml_branch_coverage=1 00:09:50.972 --rc genhtml_function_coverage=1 00:09:50.972 --rc genhtml_legend=1 00:09:50.972 --rc geninfo_all_blocks=1 00:09:50.972 --rc geninfo_unexecuted_blocks=1 00:09:50.972 00:09:50.972 ' 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.972 --rc genhtml_branch_coverage=1 00:09:50.972 --rc genhtml_function_coverage=1 00:09:50.972 --rc genhtml_legend=1 00:09:50.972 --rc geninfo_all_blocks=1 00:09:50.972 --rc geninfo_unexecuted_blocks=1 00:09:50.972 00:09:50.972 ' 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.972 --rc genhtml_branch_coverage=1 00:09:50.972 --rc genhtml_function_coverage=1 00:09:50.972 --rc genhtml_legend=1 00:09:50.972 --rc geninfo_all_blocks=1 00:09:50.972 --rc geninfo_unexecuted_blocks=1 00:09:50.972 00:09:50.972 ' 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.972 --rc genhtml_branch_coverage=1 00:09:50.972 --rc 
genhtml_function_coverage=1 00:09:50.972 --rc genhtml_legend=1 00:09:50.972 --rc geninfo_all_blocks=1 00:09:50.972 --rc geninfo_unexecuted_blocks=1 00:09:50.972 00:09:50.972 ' 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.972 11:03:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.972 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.973 11:03:59 
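The trace records a benign script error at this point: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'`, and test(1) rejects the empty string with "integer expression expected". A minimal sketch of that failure class and the usual default-expansion guard (the variable name is illustrative, not from the script):

```shell
# An empty string passed to '-eq' triggers "integer expression expected",
# as in the nvmf/common.sh line-33 message logged above. Defaulting the
# expansion with ${var:-0} avoids it ("flag" is an illustrative name).
flag=""
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

With the guard, the empty value compares as 0 and the branch prints "disabled" instead of erroring.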
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.973 11:03:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.116 11:04:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:59.116 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:59.116 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:59.116 Found net devices under 0000:31:00.0: cvl_0_0 00:09:59.116 11:04:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:59.116 Found net devices under 0000:31:00.1: cvl_0_1 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.116 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.117 11:04:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:09:59.117 00:09:59.117 --- 10.0.0.2 ping statistics --- 00:09:59.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.117 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
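The nvmf_tcp_init sequence above builds its loopback test topology by moving one NIC port (cvl_0_0) into a network namespace to act as the target, while the other (cvl_0_1) stays in the host namespace as the initiator, then ping-checks both directions. A condensed sketch of the same ip(8) sequence, with a veth pair standing in for the two physical ports so it runs without the test rig; the names `spdk_ns`/`tgt0`/`ini0` are illustrative, and root is required:

```shell
# Sketch of the namespace topology built above, using a veth pair in
# place of the physical cvl_0_* ports. Names are illustrative; needs root.
setup_ns_topology() {
    ip netns add spdk_ns
    ip link add ini0 type veth peer name tgt0
    ip link set tgt0 netns spdk_ns                   # target side enters the namespace
    ip addr add 10.0.0.1/24 dev ini0                 # initiator IP, host namespace
    ip netns exec spdk_ns ip addr add 10.0.0.2/24 dev tgt0
    ip link set ini0 up
    ip netns exec spdk_ns ip link set tgt0 up
    ip netns exec spdk_ns ip link set lo up
    ping -c 1 10.0.0.2                               # reachability check, as in the log
}
if [ "$(id -u)" -eq 0 ]; then
    setup_ns_topology
fi
```

The namespace boundary is what lets a single host exercise a real TCP path between target and initiator, rather than short-circuiting over lo.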
00:09:59.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:09:59.117 00:09:59.117 --- 10.0.0.1 ping statistics --- 00:09:59.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.117 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.117 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3968953 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3968953 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3968953 ']' 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.380 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.380 [2024-11-19 11:04:07.550696] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:09:59.380 [2024-11-19 11:04:07.550761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.380 [2024-11-19 11:04:07.660800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.380 [2024-11-19 11:04:07.711211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.380 [2024-11-19 11:04:07.711266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:59.380 [2024-11-19 11:04:07.711275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.380 [2024-11-19 11:04:07.711282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.380 [2024-11-19 11:04:07.711288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.380 [2024-11-19 11:04:07.712078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 [2024-11-19 11:04:08.413128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 [2024-11-19 11:04:08.429429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 malloc0 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.326 { 00:10:00.326 "params": { 00:10:00.326 "name": "Nvme$subsystem", 00:10:00.326 "trtype": "$TEST_TRANSPORT", 00:10:00.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.326 "adrfam": "ipv4", 00:10:00.326 "trsvcid": "$NVMF_PORT", 00:10:00.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.326 "hdgst": ${hdgst:-false}, 00:10:00.326 "ddgst": ${ddgst:-false} 00:10:00.326 }, 00:10:00.326 "method": "bdev_nvme_attach_controller" 00:10:00.326 } 00:10:00.326 EOF 00:10:00.326 )") 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:00.326 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.326 "params": { 00:10:00.326 "name": "Nvme1", 00:10:00.326 "trtype": "tcp", 00:10:00.326 "traddr": "10.0.0.2", 00:10:00.326 "adrfam": "ipv4", 00:10:00.326 "trsvcid": "4420", 00:10:00.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.326 "hdgst": false, 00:10:00.326 "ddgst": false 00:10:00.326 }, 00:10:00.326 "method": "bdev_nvme_attach_controller" 00:10:00.326 }' 00:10:00.326 [2024-11-19 11:04:08.518579] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:10:00.326 [2024-11-19 11:04:08.518648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3969066 ] 00:10:00.326 [2024-11-19 11:04:08.605123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.326 [2024-11-19 11:04:08.646962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.586 Running I/O for 10 seconds... 
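Above, bdevperf receives its controller configuration as JSON on `/dev/fd/62`: gen_nvmf_target_json assembles a `bdev_nvme_attach_controller` params block from a here-document and hands it over without touching disk. A sketch of that pass-config-as-a-file-descriptor pattern, with a python3 one-liner standing in for bdevperf and a trimmed copy of the params printed in the log:

```shell
# Sketch of the /dev/fd config handoff used for bdevperf above: generate
# JSON on the fly and let the consumer read it through a process
# substitution path instead of a temp file. The consumer is a stand-in;
# the params are a trimmed copy of those printed in the log.
gen_config() {
cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# <(gen_config) expands to a /dev/fd/NN path, like the --json /dev/fd/62 above
python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["params"]["name"])' <(gen_config)
```

This prints `Nvme1`. The benefit over a temp file is that the config never lands on disk and is scoped to the single consumer process.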
00:10:02.486 6650.00 IOPS, 51.95 MiB/s [2024-11-19T10:04:12.223Z] 6708.50 IOPS, 52.41 MiB/s [2024-11-19T10:04:13.168Z] 7684.00 IOPS, 60.03 MiB/s [2024-11-19T10:04:14.111Z] 8207.25 IOPS, 64.12 MiB/s [2024-11-19T10:04:15.055Z] 8526.60 IOPS, 66.61 MiB/s [2024-11-19T10:04:15.997Z] 8738.00 IOPS, 68.27 MiB/s [2024-11-19T10:04:16.939Z] 8889.29 IOPS, 69.45 MiB/s [2024-11-19T10:04:17.880Z] 9003.38 IOPS, 70.34 MiB/s [2024-11-19T10:04:19.265Z] 9091.11 IOPS, 71.02 MiB/s [2024-11-19T10:04:19.265Z] 9161.90 IOPS, 71.58 MiB/s 00:10:10.913 Latency(us) 00:10:10.913 [2024-11-19T10:04:19.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.913 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:10.913 Verification LBA range: start 0x0 length 0x1000 00:10:10.913 Nvme1n1 : 10.01 9163.89 71.59 0.00 0.00 13916.00 1843.20 28617.39 00:10:10.913 [2024-11-19T10:04:19.265Z] =================================================================================================================== 00:10:10.913 [2024-11-19T10:04:19.265Z] Total : 9163.89 71.59 0.00 0.00 13916.00 1843.20 28617.39 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3971105 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:10.913 11:04:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:10.913 { 00:10:10.913 "params": { 00:10:10.913 "name": "Nvme$subsystem", 00:10:10.913 "trtype": "$TEST_TRANSPORT", 00:10:10.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.913 "adrfam": "ipv4", 00:10:10.913 "trsvcid": "$NVMF_PORT", 00:10:10.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.913 "hdgst": ${hdgst:-false}, 00:10:10.913 "ddgst": ${ddgst:-false} 00:10:10.913 }, 00:10:10.913 "method": "bdev_nvme_attach_controller" 00:10:10.913 } 00:10:10.913 EOF 00:10:10.913 )") 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:10.913 [2024-11-19 11:04:18.986046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.913 [2024-11-19 11:04:18.986077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:10.913 [2024-11-19 11:04:18.994030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.913 [2024-11-19 11:04:18.994039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:10.913 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:10.913 "params": { 00:10:10.913 "name": "Nvme1", 00:10:10.913 "trtype": "tcp", 00:10:10.913 "traddr": "10.0.0.2", 00:10:10.913 "adrfam": "ipv4", 00:10:10.913 "trsvcid": "4420", 00:10:10.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.913 "hdgst": false, 00:10:10.913 "ddgst": false 00:10:10.913 }, 00:10:10.913 "method": "bdev_nvme_attach_controller" 00:10:10.913 }' 00:10:10.913 [2024-11-19 11:04:19.002049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.913 [2024-11-19 11:04:19.002057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.913 [2024-11-19 11:04:19.010069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.913 [2024-11-19 11:04:19.010077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.913 [2024-11-19 11:04:19.018091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.913 [2024-11-19 11:04:19.018098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.913 [2024-11-19 11:04:19.026111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.913 [2024-11-19 11:04:19.026123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.913 [2024-11-19 11:04:19.028327] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:10:10.913 [2024-11-19 11:04:19.028375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971105 ] 00:10:10.913 [2024-11-19 11:04:19.034133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.913 [2024-11-19 11:04:19.034141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.913 [2024-11-19 11:04:19.042152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.913 [2024-11-19 11:04:19.042160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.913 [2024-11-19 11:04:19.050174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.050181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.058194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.058202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.066213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.066221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.074234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.074241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.082254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.082262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:10.914 [2024-11-19 11:04:19.090274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.090281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.098305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.098313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.104773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.914 [2024-11-19 11:04:19.106315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.106322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.114337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.114345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.122357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.122364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.130379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.130387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.138398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.138407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.140611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.914 [2024-11-19 11:04:19.146418] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.146426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.154442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.154457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.162464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.162476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.170482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.170491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.178500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.178508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.186520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.186529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.194540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.194548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.202561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.202569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.210581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.210588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.218619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.218636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.226628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.226637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.234648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.234658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.242667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.242676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.250687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.250695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.914 [2024-11-19 11:04:19.258709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.914 [2024-11-19 11:04:19.258716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 [2024-11-19 11:04:19.266728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.174 [2024-11-19 11:04:19.266736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 [2024-11-19 11:04:19.274751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.174 
[2024-11-19 11:04:19.274759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 [2024-11-19 11:04:19.282773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.174 [2024-11-19 11:04:19.282782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 [2024-11-19 11:04:19.290795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.174 [2024-11-19 11:04:19.290803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 [2024-11-19 11:04:19.298817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.174 [2024-11-19 11:04:19.298829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 [2024-11-19 11:04:19.306836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.174 [2024-11-19 11:04:19.306848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 [2024-11-19 11:04:19.314900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.174 [2024-11-19 11:04:19.314916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 [2024-11-19 11:04:19.322881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.174 [2024-11-19 11:04:19.322890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.174 Running I/O for 5 seconds... 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.124660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.133567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.133586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.142669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.142683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.151738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.151752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.160846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.160861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.169819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.169833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.178662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.178677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.187195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.187209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.195986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.196000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.204429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.204443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.213560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.213574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.222077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.222091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.230857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.230876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.239523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.239538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.248182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.248196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.257037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.257051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.266091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 
[2024-11-19 11:04:20.266105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.274762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.274776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.283471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.283485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.292450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.292464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.959 [2024-11-19 11:04:20.301479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.959 [2024-11-19 11:04:20.301498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.310232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.310246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.318854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.318874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.327175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.327189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 19026.00 IOPS, 148.64 MiB/s [2024-11-19T10:04:20.572Z] [2024-11-19 11:04:20.335486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 
[2024-11-19 11:04:20.335503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.344169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.344184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.353181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.353195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.361752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.361766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.370530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.370543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.379717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.379731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.388279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.388293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.397108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.397123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.405670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.405684] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.414379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.414394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.423158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.423171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.432132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.432146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.440320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.440334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.449321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.449335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.458057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.458071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.466922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.466936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.475998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.476013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:12.220 [2024-11-19 11:04:20.484679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.484693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.493488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.493502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.502437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.502451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.511378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.511393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.520462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.520477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.528880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.528894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.537718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.537733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.546529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.546544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.555143] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.555157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.220 [2024-11-19 11:04:20.563762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.220 [2024-11-19 11:04:20.563777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.481 [2024-11-19 11:04:20.572846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.481 [2024-11-19 11:04:20.572866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.481 [2024-11-19 11:04:20.580849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.481 [2024-11-19 11:04:20.580868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.589843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.589858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.598717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.598732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.607479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.607493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.616173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.616188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.625322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.625337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.633945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.633960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.642153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.642169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.650818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.650833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.659508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.659522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.667828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.667842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.676971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.676986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.684745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.684759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.693794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 
[2024-11-19 11:04:20.693809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.702027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.702041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.710888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.710903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.719602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.719617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.728120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.728135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.736994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.737009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.746145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.746160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.754545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.754559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.763481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.763496] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.772510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.772524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.781266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.781280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.790169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.790184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.798807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.798821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.807555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.807570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.816198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.816212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.482 [2024-11-19 11:04:20.825148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.482 [2024-11-19 11:04:20.825163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.743 [2024-11-19 11:04:20.833687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.743 [2024-11-19 11:04:20.833702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:12.743 [2024-11-19 11:04:20.842512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.743 [2024-11-19 11:04:20.842526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.743 [2024-11-19 11:04:20.851576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.743 [2024-11-19 11:04:20.851591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.743 [2024-11-19 11:04:20.860722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.743 [2024-11-19 11:04:20.860737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.869779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.869793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.878356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.878371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.887572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.887587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.895703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.895717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.904587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.904601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.913830] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.913845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.922368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.922382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.931134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.931149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.940274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.940289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.948716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.948731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.957710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.957724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.966750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.966764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.975956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.975971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.984455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.984469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:20.993672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:20.993686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.002114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.002129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.011322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.011337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.020372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.020387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.028908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.028923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.038078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.038093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.046748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.046763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.055305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 
[2024-11-19 11:04:21.055319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.064058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.064072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.072636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.072650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.081761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.081776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.744 [2024-11-19 11:04:21.090842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.744 [2024-11-19 11:04:21.090857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.099925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.099941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.108676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.108690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.117514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.117528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.126567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.126586] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.135904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.135919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.144468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.144483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.153185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.153200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.161659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.161673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.170387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.170402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.178854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.178873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.187592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.187607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [2024-11-19 11:04:21.196177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.196191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:13.005 [2024-11-19 11:04:21.204928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.005 [2024-11-19 11:04:21.204943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.005 [... this same error pair repeats roughly every 9 ms from 11:04:21.204 through 11:04:22.631; ~160 further occurrences elided ...] 00:10:13.006 19134.00 IOPS, 149.48 MiB/s [2024-11-19T10:04:21.358Z] 00:10:14.052 19185.67 IOPS, 149.89 MiB/s [2024-11-19T10:04:22.404Z] 00:10:14.313 [2024-11-19 11:04:22.640899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.313 
[2024-11-19 11:04:22.640914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.313 [2024-11-19 11:04:22.649919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.313 [2024-11-19 11:04:22.649935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.313 [2024-11-19 11:04:22.658606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.313 [2024-11-19 11:04:22.658620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.667723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.667738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.676750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.676765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.685853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.685871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.694720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.694734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.703674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.703689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.712053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.712067] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.720710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.720724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.729148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.729163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.737558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.737572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.746805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.746820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.755030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.755044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.763984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.763999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.772997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.773012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.781849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.781868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:14.574 [2024-11-19 11:04:22.790214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.790228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.799051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.799065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.808026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.808041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.817174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.817188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.826212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.826226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.835065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.835080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.843588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.843603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.852422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.852436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.861645] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.861660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.870725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.870739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.879228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.879243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.887939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.887954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.896455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.896469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.574 [2024-11-19 11:04:22.905120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.574 [2024-11-19 11:04:22.905135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.575 [2024-11-19 11:04:22.914237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.575 [2024-11-19 11:04:22.914251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.575 [2024-11-19 11:04:22.922817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.575 [2024-11-19 11:04:22.922831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.835 [2024-11-19 11:04:22.931593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:22.931607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:22.940686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:22.940701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:22.949640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:22.949655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:22.958651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:22.958666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:22.967593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:22.967608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:22.976283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:22.976297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:22.985075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:22.985089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:22.993925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:22.993940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.002872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 
[2024-11-19 11:04:23.002886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.011348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.011362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.020486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.020500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.028762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.028775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.037524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.037538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.046443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.046457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.055055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.055069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.064055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.064069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.072474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.072488] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.080993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.081007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.089883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.089897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.098812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.098826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.107998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.108012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.115894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.115908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.125043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.125058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.133485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.133499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.141993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.142007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:14.836 [2024-11-19 11:04:23.150436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.150450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.159188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.159203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.168217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.168231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.176920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.176934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.836 [2024-11-19 11:04:23.185901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.836 [2024-11-19 11:04:23.185915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.195129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.195143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.203482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.203497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.212124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.212139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.220855] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.220873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.229369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.229383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.238315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.238328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.246868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.246882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.254800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.254814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.268424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.268439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.276255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.276270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.285276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.285293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.293319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.293333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.302118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.302132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.310811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.310826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.319177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.319191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.328076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.328090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.337196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.337210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 19212.50 IOPS, 150.10 MiB/s [2024-11-19T10:04:23.448Z] [2024-11-19 11:04:23.346169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.346183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.355005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.355019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.364048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.364062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.096 [2024-11-19 11:04:23.372612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.096 [2024-11-19 11:04:23.372626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.097 [2024-11-19 11:04:23.381267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.097 [2024-11-19 11:04:23.381281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.097 [2024-11-19 11:04:23.390034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.097 [2024-11-19 11:04:23.390048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.097 [2024-11-19 11:04:23.398628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.097 [2024-11-19 11:04:23.398642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.097 [2024-11-19 11:04:23.407379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.097 [2024-11-19 11:04:23.407394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.097 [2024-11-19 11:04:23.416276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.097 [2024-11-19 11:04:23.416291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.097 [2024-11-19 11:04:23.424704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.097 [2024-11-19 11:04:23.424718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.097 [2024-11-19 11:04:23.433471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.097 
[2024-11-19 11:04:23.433486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.097 [2024-11-19 11:04:23.442450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.097 [2024-11-19 11:04:23.442464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.451400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.451418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.460240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.460254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.468770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.468785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.477842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.477856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.486170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.486184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.494531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.494545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.503533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.503547] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.512341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.512355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.521056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.521069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.530047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.530061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.538946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.538960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.547638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.547652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.556239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.556253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.565331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.565345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.358 [2024-11-19 11:04:23.574120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.574134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:15.358 [2024-11-19 11:04:23.583160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.358 [2024-11-19 11:04:23.583174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.162 19225.40 IOPS, 150.20 MiB/s [2024-11-19T10:04:24.514Z] [2024-11-19 11:04:24.346804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.162 [2024-11-19 11:04:24.346818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.162 00:10:16.162 Latency(us) 00:10:16.162 [2024-11-19T10:04:24.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.162 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:16.162 Nvme1n1 : 5.01 19225.21 150.20 0.00 0.00 6651.29 2525.87 16493.23 00:10:16.162 [2024-11-19T10:04:24.514Z]
=================================================================================================================== 00:10:16.162 [2024-11-19T10:04:24.514Z] Total : 19225.21 150.20 0.00 0.00 6651.29 2525.87 16493.23 00:10:16.162 [2024-11-19 11:04:24.354820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.162 [2024-11-19 11:04:24.354831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.162 [2024-11-19 11:04:24.459082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.162 [2024-11-19 11:04:24.459090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3971105) - No such process 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3971105 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.162 delay0 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.162 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:16.423 [2024-11-19 11:04:24.641058] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:24.714 Initializing NVMe Controllers 00:10:24.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:24.714 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:24.714 Initialization complete. Launching workers. 00:10:24.714 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 245, failed: 30008 00:10:24.714 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30124, failed to submit 129 00:10:24.714 success 30048, unsuccessful 76, failed 0 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.714 rmmod nvme_tcp 00:10:24.714 rmmod nvme_fabrics 00:10:24.714 rmmod nvme_keyring 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3968953 ']' 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3968953 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3968953 ']' 00:10:24.714 11:04:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3968953 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3968953 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3968953' 00:10:24.714 killing process with pid 3968953 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3968953 00:10:24.714 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3968953 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.714 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.101 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:26.101 00:10:26.101 real 0m35.227s 00:10:26.101 user 0m46.045s 00:10:26.101 sys 0m12.367s 00:10:26.101 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.101 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:26.101 ************************************ 00:10:26.101 END TEST nvmf_zcopy 00:10:26.101 ************************************ 00:10:26.101 11:04:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:26.101 11:04:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.101 11:04:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.101 11:04:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.101 ************************************ 00:10:26.101 START TEST nvmf_nmic 00:10:26.101 ************************************ 00:10:26.101 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:26.101 * Looking for test storage... 
00:10:26.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.102 11:04:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.102 --rc genhtml_branch_coverage=1 00:10:26.102 --rc genhtml_function_coverage=1 00:10:26.102 --rc genhtml_legend=1 00:10:26.102 --rc geninfo_all_blocks=1 00:10:26.102 --rc geninfo_unexecuted_blocks=1 
00:10:26.102 00:10:26.102 ' 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.102 --rc genhtml_branch_coverage=1 00:10:26.102 --rc genhtml_function_coverage=1 00:10:26.102 --rc genhtml_legend=1 00:10:26.102 --rc geninfo_all_blocks=1 00:10:26.102 --rc geninfo_unexecuted_blocks=1 00:10:26.102 00:10:26.102 ' 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.102 --rc genhtml_branch_coverage=1 00:10:26.102 --rc genhtml_function_coverage=1 00:10:26.102 --rc genhtml_legend=1 00:10:26.102 --rc geninfo_all_blocks=1 00:10:26.102 --rc geninfo_unexecuted_blocks=1 00:10:26.102 00:10:26.102 ' 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.102 --rc genhtml_branch_coverage=1 00:10:26.102 --rc genhtml_function_coverage=1 00:10:26.102 --rc genhtml_legend=1 00:10:26.102 --rc geninfo_all_blocks=1 00:10:26.102 --rc geninfo_unexecuted_blocks=1 00:10:26.102 00:10:26.102 ' 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.102 11:04:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:26.102 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:26.103 
11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:26.103 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.247 11:04:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.247 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:34.248 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:34.248 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:34.248 Found net devices under 0000:31:00.0: cvl_0_0 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:34.248 Found net devices under 0000:31:00.1: cvl_0_1 00:10:34.248 
11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.248 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:34.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:10:34.510 00:10:34.510 --- 10.0.0.2 ping statistics --- 00:10:34.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.510 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:34.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:10:34.510 00:10:34.510 --- 10.0.0.1 ping statistics --- 00:10:34.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.510 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3978471 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3978471 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3978471 ']' 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.510 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.771 [2024-11-19 11:04:42.887250] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:10:34.771 [2024-11-19 11:04:42.887321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.771 [2024-11-19 11:04:42.981764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.771 [2024-11-19 11:04:43.024795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.771 [2024-11-19 11:04:43.024829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:34.771 [2024-11-19 11:04:43.024837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.771 [2024-11-19 11:04:43.024844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.771 [2024-11-19 11:04:43.024850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.771 [2024-11-19 11:04:43.026561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.771 [2024-11-19 11:04:43.026683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.771 [2024-11-19 11:04:43.026810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.771 [2024-11-19 11:04:43.026811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.342 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.342 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:35.342 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:35.342 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.342 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 [2024-11-19 11:04:43.717595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.603 
11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 Malloc0 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 [2024-11-19 11:04:43.793191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:35.603 test case1: single bdev can't be used in multiple subsystems 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.603 [2024-11-19 11:04:43.829123] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:35.603 [2024-11-19 
11:04:43.829142] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:35.603 [2024-11-19 11:04:43.829150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.603 request: 00:10:35.603 { 00:10:35.603 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:35.603 "namespace": { 00:10:35.603 "bdev_name": "Malloc0", 00:10:35.603 "no_auto_visible": false 00:10:35.603 }, 00:10:35.603 "method": "nvmf_subsystem_add_ns", 00:10:35.603 "req_id": 1 00:10:35.603 } 00:10:35.603 Got JSON-RPC error response 00:10:35.603 response: 00:10:35.603 { 00:10:35.603 "code": -32602, 00:10:35.603 "message": "Invalid parameters" 00:10:35.603 } 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:35.603 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:35.604 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:35.604 Adding namespace failed - expected result. 
00:10:35.604 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:35.604 test case2: host connect to nvmf target in multiple paths 00:10:35.604 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:35.604 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.604 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.604 [2024-11-19 11:04:43.841289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:35.604 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.604 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.519 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:38.899 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.899 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:38.899 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.899 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:38.899 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:40.811 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.811 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.811 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.811 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:40.811 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.811 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:40.811 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.811 [global] 00:10:40.811 thread=1 00:10:40.811 invalidate=1 00:10:40.811 rw=write 00:10:40.811 time_based=1 00:10:40.811 runtime=1 00:10:40.811 ioengine=libaio 00:10:40.811 direct=1 00:10:40.811 bs=4096 00:10:40.811 iodepth=1 00:10:40.811 norandommap=0 00:10:40.811 numjobs=1 00:10:40.811 00:10:40.811 verify_dump=1 00:10:40.811 verify_backlog=512 00:10:40.811 verify_state_save=0 00:10:40.811 do_verify=1 00:10:40.811 verify=crc32c-intel 00:10:40.811 [job0] 00:10:40.811 filename=/dev/nvme0n1 00:10:40.811 Could not set queue depth (nvme0n1) 00:10:41.071 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.071 fio-3.35 00:10:41.071 Starting 1 thread 00:10:42.458 00:10:42.458 job0: (groupid=0, jobs=1): err= 0: pid=3980019: Tue Nov 19 11:04:50 2024 00:10:42.458 read: IOPS=555, BW=2222KiB/s (2275kB/s)(2224KiB/1001msec) 00:10:42.458 slat (nsec): min=6555, max=62326, avg=23390.43, stdev=6755.58 00:10:42.458 clat (usec): min=409, max=2223, avg=728.19, stdev=139.50 00:10:42.458 lat (usec): min=417, max=2249, 
avg=751.58, stdev=140.68 00:10:42.458 clat percentiles (usec): 00:10:42.458 | 1.00th=[ 474], 5.00th=[ 553], 10.00th=[ 578], 20.00th=[ 644], 00:10:42.458 | 30.00th=[ 668], 40.00th=[ 685], 50.00th=[ 717], 60.00th=[ 758], 00:10:42.458 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 873], 00:10:42.458 | 99.00th=[ 938], 99.50th=[ 1795], 99.90th=[ 2212], 99.95th=[ 2212], 00:10:42.458 | 99.99th=[ 2212] 00:10:42.458 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:42.458 slat (nsec): min=9397, max=67391, avg=30692.22, stdev=7913.24 00:10:42.458 clat (usec): min=151, max=800, avg=526.43, stdev=101.24 00:10:42.458 lat (usec): min=161, max=833, avg=557.12, stdev=104.07 00:10:42.458 clat percentiles (usec): 00:10:42.458 | 1.00th=[ 258], 5.00th=[ 359], 10.00th=[ 396], 20.00th=[ 441], 00:10:42.458 | 30.00th=[ 482], 40.00th=[ 502], 50.00th=[ 523], 60.00th=[ 553], 00:10:42.458 | 70.00th=[ 594], 80.00th=[ 619], 90.00th=[ 660], 95.00th=[ 676], 00:10:42.458 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 766], 99.95th=[ 799], 00:10:42.458 | 99.99th=[ 799] 00:10:42.458 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:42.458 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:42.458 lat (usec) : 250=0.32%, 500=25.57%, 750=59.49%, 1000=14.37% 00:10:42.458 lat (msec) : 2=0.19%, 4=0.06% 00:10:42.458 cpu : usr=3.00%, sys=3.90%, ctx=1580, majf=0, minf=1 00:10:42.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.458 issued rwts: total=556,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.458 00:10:42.458 Run status group 0 (all jobs): 00:10:42.458 READ: bw=2222KiB/s (2275kB/s), 2222KiB/s-2222KiB/s (2275kB/s-2275kB/s), 
io=2224KiB (2277kB), run=1001-1001msec 00:10:42.458 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:10:42.458 00:10:42.458 Disk stats (read/write): 00:10:42.458 nvme0n1: ios=562/896, merge=0/0, ticks=504/441, in_queue=945, util=98.10% 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:42.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:42.458 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.459 rmmod nvme_tcp 00:10:42.459 rmmod nvme_fabrics 00:10:42.459 rmmod nvme_keyring 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3978471 ']' 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3978471 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3978471 ']' 00:10:42.459 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3978471 00:10:42.721 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:42.721 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.721 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3978471 00:10:42.721 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.721 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.721 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3978471' 00:10:42.721 killing process with pid 3978471 00:10:42.721 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3978471 00:10:42.721 11:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3978471 00:10:42.721 11:04:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.721 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.266 00:10:45.266 real 0m18.887s 00:10:45.266 user 0m49.459s 00:10:45.266 sys 0m7.193s 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.266 ************************************ 00:10:45.266 END TEST nvmf_nmic 00:10:45.266 ************************************ 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.266 ************************************ 00:10:45.266 START TEST nvmf_fio_target 00:10:45.266 ************************************ 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:45.266 * Looking for test storage... 00:10:45.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:45.266 11:04:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:45.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.266 --rc genhtml_branch_coverage=1 00:10:45.266 --rc genhtml_function_coverage=1 00:10:45.266 --rc genhtml_legend=1 00:10:45.266 --rc geninfo_all_blocks=1 00:10:45.266 --rc geninfo_unexecuted_blocks=1 00:10:45.266 00:10:45.266 ' 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:45.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.266 --rc genhtml_branch_coverage=1 00:10:45.266 --rc genhtml_function_coverage=1 00:10:45.266 --rc genhtml_legend=1 00:10:45.266 --rc geninfo_all_blocks=1 00:10:45.266 --rc geninfo_unexecuted_blocks=1 00:10:45.266 00:10:45.266 ' 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:45.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.266 --rc genhtml_branch_coverage=1 00:10:45.266 --rc genhtml_function_coverage=1 00:10:45.266 --rc genhtml_legend=1 00:10:45.266 --rc geninfo_all_blocks=1 00:10:45.266 --rc geninfo_unexecuted_blocks=1 00:10:45.266 00:10:45.266 ' 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:45.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.266 --rc genhtml_branch_coverage=1 00:10:45.266 --rc genhtml_function_coverage=1 00:10:45.266 --rc genhtml_legend=1 00:10:45.266 --rc geninfo_all_blocks=1 00:10:45.266 --rc geninfo_unexecuted_blocks=1 00:10:45.266 00:10:45.266 ' 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:45.266 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.267 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.408 11:05:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.408 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:53.409 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:53.409 11:05:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:53.409 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:53.409 Found net devices under 0000:31:00.0: cvl_0_0 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:53.409 Found net devices under 0000:31:00.1: cvl_0_1 
00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:53.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:10:53.409 00:10:53.409 --- 10.0.0.2 ping statistics --- 00:10:53.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.409 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:10:53.409 00:10:53.409 --- 10.0.0.1 ping statistics --- 00:10:53.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.409 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.409 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3985049 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3985049 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3985049 ']' 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.670 11:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.670 [2024-11-19 11:05:01.862989] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:10:53.670 [2024-11-19 11:05:01.863064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.670 [2024-11-19 11:05:01.955932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.670 [2024-11-19 11:05:01.997367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.671 [2024-11-19 11:05:01.997405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.671 [2024-11-19 11:05:01.997413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.671 [2024-11-19 11:05:01.997420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.671 [2024-11-19 11:05:01.997426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:53.671 [2024-11-19 11:05:01.999070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.671 [2024-11-19 11:05:01.999243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.671 [2024-11-19 11:05:01.999381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.671 [2024-11-19 11:05:01.999382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.614 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.614 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:54.614 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.614 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.614 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.614 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.614 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:54.614 [2024-11-19 11:05:02.862659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.614 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.876 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:54.876 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.137 11:05:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:55.137 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.137 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:55.137 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.397 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:55.397 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:55.657 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.917 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:55.917 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.917 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:55.917 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.177 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:56.177 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:56.438 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.698 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:56.698 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.698 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:56.698 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.958 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.219 [2024-11-19 11:05:05.332035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.219 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:57.219 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:57.480 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:59.390 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:59.390 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:59.390 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.390 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:59.390 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:59.390 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:01.318 11:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:01.318 11:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:01.318 11:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.318 11:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:01.318 11:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.318 11:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:01.318 11:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:01.318 [global] 00:11:01.318 thread=1 00:11:01.318 invalidate=1 00:11:01.318 rw=write 00:11:01.318 time_based=1 00:11:01.318 runtime=1 00:11:01.318 ioengine=libaio 00:11:01.318 direct=1 00:11:01.318 bs=4096 00:11:01.318 iodepth=1 00:11:01.318 norandommap=0 00:11:01.318 numjobs=1 00:11:01.318 00:11:01.318 
verify_dump=1 00:11:01.318 verify_backlog=512 00:11:01.318 verify_state_save=0 00:11:01.318 do_verify=1 00:11:01.318 verify=crc32c-intel 00:11:01.318 [job0] 00:11:01.318 filename=/dev/nvme0n1 00:11:01.318 [job1] 00:11:01.318 filename=/dev/nvme0n2 00:11:01.318 [job2] 00:11:01.318 filename=/dev/nvme0n3 00:11:01.318 [job3] 00:11:01.318 filename=/dev/nvme0n4 00:11:01.318 Could not set queue depth (nvme0n1) 00:11:01.318 Could not set queue depth (nvme0n2) 00:11:01.318 Could not set queue depth (nvme0n3) 00:11:01.318 Could not set queue depth (nvme0n4) 00:11:01.582 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.582 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.582 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.582 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.582 fio-3.35 00:11:01.582 Starting 4 threads 00:11:02.993 00:11:02.993 job0: (groupid=0, jobs=1): err= 0: pid=3986964: Tue Nov 19 11:05:11 2024 00:11:02.993 read: IOPS=19, BW=77.7KiB/s (79.6kB/s)(80.0KiB/1029msec) 00:11:02.993 slat (nsec): min=24628, max=26422, avg=25846.35, stdev=353.54 00:11:02.993 clat (usec): min=40644, max=42032, avg=41897.57, stdev=297.32 00:11:02.993 lat (usec): min=40670, max=42056, avg=41923.42, stdev=297.27 00:11:02.993 clat percentiles (usec): 00:11:02.993 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41681], 20.00th=[41681], 00:11:02.993 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:02.993 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:02.993 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:02.993 | 99.99th=[42206] 00:11:02.993 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:11:02.993 slat (nsec): min=9374, 
max=52787, avg=27761.94, stdev=10165.96 00:11:02.993 clat (usec): min=92, max=896, avg=330.68, stdev=79.06 00:11:02.993 lat (usec): min=102, max=928, avg=358.44, stdev=80.62 00:11:02.993 clat percentiles (usec): 00:11:02.993 | 1.00th=[ 135], 5.00th=[ 229], 10.00th=[ 245], 20.00th=[ 265], 00:11:02.993 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 334], 60.00th=[ 355], 00:11:02.993 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 453], 00:11:02.993 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 898], 99.95th=[ 898], 00:11:02.993 | 99.99th=[ 898] 00:11:02.993 bw ( KiB/s): min= 4096, max= 4096, per=35.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:02.993 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:02.993 lat (usec) : 100=0.19%, 250=10.90%, 500=84.21%, 750=0.75%, 1000=0.19% 00:11:02.993 lat (msec) : 50=3.76% 00:11:02.993 cpu : usr=0.68%, sys=1.46%, ctx=536, majf=0, minf=1 00:11:02.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.993 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.993 job1: (groupid=0, jobs=1): err= 0: pid=3986966: Tue Nov 19 11:05:11 2024 00:11:02.993 read: IOPS=501, BW=2008KiB/s (2056kB/s)(2064KiB/1028msec) 00:11:02.993 slat (nsec): min=6896, max=64240, avg=24958.87, stdev=7159.20 00:11:02.993 clat (usec): min=182, max=41056, avg=1072.16, stdev=3525.18 00:11:02.993 lat (usec): min=189, max=41085, avg=1097.11, stdev=3525.18 00:11:02.993 clat percentiles (usec): 00:11:02.993 | 1.00th=[ 441], 5.00th=[ 506], 10.00th=[ 603], 20.00th=[ 652], 00:11:02.993 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 816], 00:11:02.993 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 906], 95.00th=[ 930], 00:11:02.993 | 99.00th=[ 1004], 
99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:11:02.993 | 99.99th=[41157] 00:11:02.993 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:11:02.993 slat (usec): min=9, max=3102, avg=31.61, stdev=102.46 00:11:02.993 clat (usec): min=131, max=1151, avg=405.47, stdev=129.36 00:11:02.993 lat (usec): min=159, max=3710, avg=437.08, stdev=172.99 00:11:02.993 clat percentiles (usec): 00:11:02.993 | 1.00th=[ 174], 5.00th=[ 219], 10.00th=[ 237], 20.00th=[ 293], 00:11:02.993 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 400], 60.00th=[ 433], 00:11:02.993 | 70.00th=[ 469], 80.00th=[ 510], 90.00th=[ 578], 95.00th=[ 619], 00:11:02.993 | 99.00th=[ 725], 99.50th=[ 758], 99.90th=[ 1090], 99.95th=[ 1156], 00:11:02.993 | 99.99th=[ 1156] 00:11:02.993 bw ( KiB/s): min= 4096, max= 4096, per=35.00%, avg=4096.00, stdev= 0.00, samples=2 00:11:02.993 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:02.993 lat (usec) : 250=8.05%, 500=45.26%, 750=25.97%, 1000=20.19% 00:11:02.993 lat (msec) : 2=0.26%, 50=0.26% 00:11:02.993 cpu : usr=1.46%, sys=4.77%, ctx=1544, majf=0, minf=1 00:11:02.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.993 issued rwts: total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.993 job2: (groupid=0, jobs=1): err= 0: pid=3986968: Tue Nov 19 11:05:11 2024 00:11:02.993 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:02.993 slat (nsec): min=6245, max=47932, avg=24988.74, stdev=7386.34 00:11:02.993 clat (usec): min=306, max=1235, avg=911.60, stdev=214.34 00:11:02.993 lat (usec): min=319, max=1263, avg=936.58, stdev=218.18 00:11:02.993 clat percentiles (usec): 00:11:02.993 | 1.00th=[ 404], 5.00th=[ 498], 10.00th=[ 562], 
20.00th=[ 676], 00:11:02.993 | 30.00th=[ 807], 40.00th=[ 955], 50.00th=[ 1012], 60.00th=[ 1037], 00:11:02.993 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1139], 00:11:02.993 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:11:02.993 | 99.99th=[ 1237] 00:11:02.993 write: IOPS=997, BW=3988KiB/s (4084kB/s)(3992KiB/1001msec); 0 zone resets 00:11:02.993 slat (nsec): min=8946, max=81402, avg=23266.10, stdev=12926.32 00:11:02.993 clat (usec): min=148, max=970, avg=489.67, stdev=160.35 00:11:02.993 lat (usec): min=157, max=1005, avg=512.93, stdev=169.18 00:11:02.993 clat percentiles (usec): 00:11:02.993 | 1.00th=[ 190], 5.00th=[ 239], 10.00th=[ 269], 20.00th=[ 330], 00:11:02.993 | 30.00th=[ 383], 40.00th=[ 445], 50.00th=[ 494], 60.00th=[ 545], 00:11:02.993 | 70.00th=[ 586], 80.00th=[ 652], 90.00th=[ 709], 95.00th=[ 734], 00:11:02.993 | 99.00th=[ 791], 99.50th=[ 857], 99.90th=[ 971], 99.95th=[ 971], 00:11:02.993 | 99.99th=[ 971] 00:11:02.993 bw ( KiB/s): min= 4087, max= 4087, per=34.92%, avg=4087.00, stdev= 0.00, samples=1 00:11:02.994 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:02.994 lat (usec) : 250=4.90%, 500=30.79%, 750=37.22%, 1000=9.47% 00:11:02.994 lat (msec) : 2=17.62% 00:11:02.994 cpu : usr=3.00%, sys=4.20%, ctx=1511, majf=0, minf=1 00:11:02.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.994 issued rwts: total=512,998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.994 job3: (groupid=0, jobs=1): err= 0: pid=3986973: Tue Nov 19 11:05:11 2024 00:11:02.994 read: IOPS=43, BW=173KiB/s (177kB/s)(180KiB/1041msec) 00:11:02.994 slat (nsec): min=7870, max=27695, avg=26144.76, stdev=3797.71 00:11:02.994 clat (usec): min=634, 
max=41265, avg=17849.11, stdev=20076.77 00:11:02.994 lat (usec): min=643, max=41273, avg=17875.25, stdev=20076.68 00:11:02.994 clat percentiles (usec): 00:11:02.994 | 1.00th=[ 635], 5.00th=[ 717], 10.00th=[ 807], 20.00th=[ 865], 00:11:02.994 | 30.00th=[ 889], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[40633], 00:11:02.994 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:02.994 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:02.994 | 99.99th=[41157] 00:11:02.994 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:11:02.994 slat (nsec): min=10021, max=71275, avg=30935.96, stdev=10749.49 00:11:02.994 clat (usec): min=193, max=3571, avg=416.14, stdev=230.01 00:11:02.994 lat (usec): min=204, max=3606, avg=447.08, stdev=231.84 00:11:02.994 clat percentiles (usec): 00:11:02.994 | 1.00th=[ 212], 5.00th=[ 237], 10.00th=[ 277], 20.00th=[ 314], 00:11:02.994 | 30.00th=[ 330], 40.00th=[ 351], 50.00th=[ 371], 60.00th=[ 412], 00:11:02.994 | 70.00th=[ 457], 80.00th=[ 506], 90.00th=[ 562], 95.00th=[ 627], 00:11:02.994 | 99.00th=[ 832], 99.50th=[ 1188], 99.90th=[ 3556], 99.95th=[ 3556], 00:11:02.994 | 99.99th=[ 3556] 00:11:02.994 bw ( KiB/s): min= 4096, max= 4096, per=35.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:02.994 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:02.994 lat (usec) : 250=6.64%, 500=66.25%, 750=18.13%, 1000=4.85% 00:11:02.994 lat (msec) : 2=0.36%, 4=0.36%, 50=3.41% 00:11:02.994 cpu : usr=0.96%, sys=1.35%, ctx=559, majf=0, minf=1 00:11:02.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.994 issued rwts: total=45,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.994 00:11:02.994 Run status 
group 0 (all jobs): 00:11:02.994 READ: bw=4200KiB/s (4301kB/s), 77.7KiB/s-2046KiB/s (79.6kB/s-2095kB/s), io=4372KiB (4477kB), run=1001-1041msec 00:11:02.994 WRITE: bw=11.4MiB/s (12.0MB/s), 1967KiB/s-3988KiB/s (2015kB/s-4084kB/s), io=11.9MiB (12.5MB), run=1001-1041msec 00:11:02.994 00:11:02.994 Disk stats (read/write): 00:11:02.994 nvme0n1: ios=65/512, merge=0/0, ticks=694/166, in_queue=860, util=87.07% 00:11:02.994 nvme0n2: ios=560/948, merge=0/0, ticks=507/386, in_queue=893, util=91.11% 00:11:02.994 nvme0n3: ios=569/735, merge=0/0, ticks=530/290, in_queue=820, util=94.28% 00:11:02.994 nvme0n4: ios=96/512, merge=0/0, ticks=723/205, in_queue=928, util=97.53% 00:11:02.994 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:02.994 [global] 00:11:02.994 thread=1 00:11:02.994 invalidate=1 00:11:02.994 rw=randwrite 00:11:02.994 time_based=1 00:11:02.994 runtime=1 00:11:02.994 ioengine=libaio 00:11:02.994 direct=1 00:11:02.994 bs=4096 00:11:02.994 iodepth=1 00:11:02.994 norandommap=0 00:11:02.994 numjobs=1 00:11:02.994 00:11:02.994 verify_dump=1 00:11:02.994 verify_backlog=512 00:11:02.994 verify_state_save=0 00:11:02.994 do_verify=1 00:11:02.994 verify=crc32c-intel 00:11:02.994 [job0] 00:11:02.994 filename=/dev/nvme0n1 00:11:02.994 [job1] 00:11:02.994 filename=/dev/nvme0n2 00:11:02.994 [job2] 00:11:02.994 filename=/dev/nvme0n3 00:11:02.994 [job3] 00:11:02.994 filename=/dev/nvme0n4 00:11:02.994 Could not set queue depth (nvme0n1) 00:11:02.994 Could not set queue depth (nvme0n2) 00:11:02.994 Could not set queue depth (nvme0n3) 00:11:02.994 Could not set queue depth (nvme0n4) 00:11:03.256 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.256 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.256 job2: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.256 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.256 fio-3.35 00:11:03.256 Starting 4 threads 00:11:04.641 00:11:04.641 job0: (groupid=0, jobs=1): err= 0: pid=3987456: Tue Nov 19 11:05:12 2024 00:11:04.641 read: IOPS=17, BW=71.2KiB/s (72.9kB/s)(72.0KiB/1011msec) 00:11:04.641 slat (nsec): min=10608, max=27564, avg=26417.22, stdev=3948.49 00:11:04.641 clat (usec): min=860, max=42021, avg=38992.47, stdev=9526.29 00:11:04.641 lat (usec): min=871, max=42049, avg=39018.89, stdev=9530.23 00:11:04.641 clat percentiles (usec): 00:11:04.641 | 1.00th=[ 865], 5.00th=[ 865], 10.00th=[40633], 20.00th=[41157], 00:11:04.641 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:04.641 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:04.641 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:04.641 | 99.99th=[42206] 00:11:04.641 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:04.641 slat (nsec): min=9352, max=53412, avg=31473.67, stdev=8822.83 00:11:04.641 clat (usec): min=167, max=914, avg=563.12, stdev=125.00 00:11:04.641 lat (usec): min=177, max=948, avg=594.60, stdev=128.82 00:11:04.641 clat percentiles (usec): 00:11:04.641 | 1.00th=[ 253], 5.00th=[ 306], 10.00th=[ 388], 20.00th=[ 474], 00:11:04.641 | 30.00th=[ 506], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 611], 00:11:04.641 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 709], 95.00th=[ 742], 00:11:04.641 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 914], 99.95th=[ 914], 00:11:04.641 | 99.99th=[ 914] 00:11:04.641 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:11:04.641 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:04.641 lat (usec) : 250=0.94%, 500=25.09%, 750=66.98%, 1000=3.77% 
00:11:04.641 lat (msec) : 50=3.21% 00:11:04.641 cpu : usr=1.09%, sys=2.08%, ctx=532, majf=0, minf=1 00:11:04.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.641 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.641 job1: (groupid=0, jobs=1): err= 0: pid=3987471: Tue Nov 19 11:05:12 2024 00:11:04.641 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:11:04.641 slat (nsec): min=26751, max=27244, avg=26896.44, stdev=112.35 00:11:04.641 clat (usec): min=1092, max=42088, avg=39262.98, stdev=9536.25 00:11:04.641 lat (usec): min=1119, max=42115, avg=39289.88, stdev=9536.24 00:11:04.641 clat percentiles (usec): 00:11:04.641 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41157], 20.00th=[41157], 00:11:04.641 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:11:04.641 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:04.641 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:04.641 | 99.99th=[42206] 00:11:04.641 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:11:04.641 slat (nsec): min=3719, max=68550, avg=29562.18, stdev=10458.86 00:11:04.641 clat (usec): min=265, max=1006, avg=603.16, stdev=119.93 00:11:04.641 lat (usec): min=298, max=1039, avg=632.72, stdev=124.06 00:11:04.641 clat percentiles (usec): 00:11:04.641 | 1.00th=[ 334], 5.00th=[ 379], 10.00th=[ 449], 20.00th=[ 498], 00:11:04.641 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 635], 00:11:04.641 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 783], 00:11:04.641 | 99.00th=[ 881], 99.50th=[ 947], 99.90th=[ 1004], 99.95th=[ 1004], 00:11:04.641 | 99.99th=[ 1004] 00:11:04.641 bw ( KiB/s): min= 
4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:11:04.641 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:04.641 lat (usec) : 500=19.62%, 750=67.92%, 1000=8.87% 00:11:04.641 lat (msec) : 2=0.38%, 50=3.21% 00:11:04.641 cpu : usr=0.39%, sys=1.84%, ctx=533, majf=0, minf=1 00:11:04.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.641 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.641 job2: (groupid=0, jobs=1): err= 0: pid=3987490: Tue Nov 19 11:05:12 2024 00:11:04.641 read: IOPS=632, BW=2529KiB/s (2590kB/s)(2532KiB/1001msec) 00:11:04.641 slat (nsec): min=7140, max=84704, avg=25228.00, stdev=7330.49 00:11:04.641 clat (usec): min=322, max=1143, avg=808.70, stdev=111.95 00:11:04.641 lat (usec): min=349, max=1170, avg=833.92, stdev=113.98 00:11:04.641 clat percentiles (usec): 00:11:04.641 | 1.00th=[ 490], 5.00th=[ 611], 10.00th=[ 660], 20.00th=[ 734], 00:11:04.641 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 840], 00:11:04.641 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 938], 95.00th=[ 971], 00:11:04.641 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1139], 99.95th=[ 1139], 00:11:04.641 | 99.99th=[ 1139] 00:11:04.641 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:04.641 slat (nsec): min=9813, max=67774, avg=29189.84, stdev=10137.64 00:11:04.641 clat (usec): min=133, max=3368, avg=420.63, stdev=158.84 00:11:04.641 lat (usec): min=144, max=3380, avg=449.82, stdev=160.45 00:11:04.641 clat percentiles (usec): 00:11:04.641 | 1.00th=[ 192], 5.00th=[ 225], 10.00th=[ 277], 20.00th=[ 310], 00:11:04.641 | 30.00th=[ 334], 40.00th=[ 363], 50.00th=[ 412], 60.00th=[ 445], 00:11:04.641 | 70.00th=[ 
474], 80.00th=[ 519], 90.00th=[ 578], 95.00th=[ 644], 00:11:04.641 | 99.00th=[ 791], 99.50th=[ 881], 99.90th=[ 1123], 99.95th=[ 3359], 00:11:04.641 | 99.99th=[ 3359] 00:11:04.641 bw ( KiB/s): min= 4096, max= 4104, per=41.48%, avg=4100.00, stdev= 5.66, samples=2 00:11:04.641 iops : min= 1024, max= 1026, avg=1025.00, stdev= 1.41, samples=2 00:11:04.641 lat (usec) : 250=4.53%, 500=42.25%, 750=23.96%, 1000=27.76% 00:11:04.641 lat (msec) : 2=1.45%, 4=0.06% 00:11:04.641 cpu : usr=1.90%, sys=5.10%, ctx=1659, majf=0, minf=1 00:11:04.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.641 issued rwts: total=633,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.641 job3: (groupid=0, jobs=1): err= 0: pid=3987494: Tue Nov 19 11:05:12 2024 00:11:04.641 read: IOPS=18, BW=74.3KiB/s (76.1kB/s)(76.0KiB/1023msec) 00:11:04.641 slat (nsec): min=26943, max=27662, avg=27316.74, stdev=186.14 00:11:04.641 clat (usec): min=1186, max=42041, avg=39708.90, stdev=9332.61 00:11:04.641 lat (usec): min=1213, max=42068, avg=39736.21, stdev=9332.53 00:11:04.641 clat percentiles (usec): 00:11:04.641 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41157], 20.00th=[41681], 00:11:04.641 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:04.641 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:04.641 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:04.641 | 99.99th=[42206] 00:11:04.641 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:11:04.641 slat (nsec): min=9795, max=52017, avg=21407.95, stdev=11571.09 00:11:04.641 clat (usec): min=155, max=3300, avg=496.31, stdev=178.10 00:11:04.641 lat (usec): min=166, max=3334, avg=517.72, 
stdev=179.61 00:11:04.641 clat percentiles (usec): 00:11:04.641 | 1.00th=[ 227], 5.00th=[ 273], 10.00th=[ 330], 20.00th=[ 379], 00:11:04.641 | 30.00th=[ 420], 40.00th=[ 469], 50.00th=[ 494], 60.00th=[ 519], 00:11:04.641 | 70.00th=[ 553], 80.00th=[ 594], 90.00th=[ 644], 95.00th=[ 709], 00:11:04.641 | 99.00th=[ 807], 99.50th=[ 857], 99.90th=[ 3294], 99.95th=[ 3294], 00:11:04.641 | 99.99th=[ 3294] 00:11:04.641 bw ( KiB/s): min= 4104, max= 4104, per=41.52%, avg=4104.00, stdev= 0.00, samples=1 00:11:04.641 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:11:04.641 lat (usec) : 250=3.01%, 500=48.59%, 750=41.24%, 1000=3.39% 00:11:04.641 lat (msec) : 2=0.19%, 4=0.19%, 50=3.39% 00:11:04.641 cpu : usr=0.78%, sys=0.78%, ctx=533, majf=0, minf=1 00:11:04.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.641 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.641 00:11:04.641 Run status group 0 (all jobs): 00:11:04.641 READ: bw=2656KiB/s (2720kB/s), 69.5KiB/s-2529KiB/s (71.2kB/s-2590kB/s), io=2752KiB (2818kB), run=1001-1036msec 00:11:04.641 WRITE: bw=9884KiB/s (10.1MB/s), 1977KiB/s-4092KiB/s (2024kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1036msec 00:11:04.641 00:11:04.641 Disk stats (read/write): 00:11:04.641 nvme0n1: ios=68/512, merge=0/0, ticks=569/213, in_queue=782, util=87.58% 00:11:04.641 nvme0n2: ios=55/512, merge=0/0, ticks=601/303, in_queue=904, util=91.54% 00:11:04.641 nvme0n3: ios=569/873, merge=0/0, ticks=646/370, in_queue=1016, util=92.19% 00:11:04.641 nvme0n4: ios=77/512, merge=0/0, ticks=672/240, in_queue=912, util=97.76% 00:11:04.641 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:04.641 [global] 00:11:04.641 thread=1 00:11:04.641 invalidate=1 00:11:04.641 rw=write 00:11:04.641 time_based=1 00:11:04.641 runtime=1 00:11:04.641 ioengine=libaio 00:11:04.641 direct=1 00:11:04.641 bs=4096 00:11:04.641 iodepth=128 00:11:04.641 norandommap=0 00:11:04.641 numjobs=1 00:11:04.641 00:11:04.641 verify_dump=1 00:11:04.641 verify_backlog=512 00:11:04.641 verify_state_save=0 00:11:04.642 do_verify=1 00:11:04.642 verify=crc32c-intel 00:11:04.642 [job0] 00:11:04.642 filename=/dev/nvme0n1 00:11:04.642 [job1] 00:11:04.642 filename=/dev/nvme0n2 00:11:04.642 [job2] 00:11:04.642 filename=/dev/nvme0n3 00:11:04.642 [job3] 00:11:04.642 filename=/dev/nvme0n4 00:11:04.642 Could not set queue depth (nvme0n1) 00:11:04.642 Could not set queue depth (nvme0n2) 00:11:04.642 Could not set queue depth (nvme0n3) 00:11:04.642 Could not set queue depth (nvme0n4) 00:11:04.902 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.902 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.902 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.902 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.902 fio-3.35 00:11:04.902 Starting 4 threads 00:11:06.285 00:11:06.285 job0: (groupid=0, jobs=1): err= 0: pid=3987998: Tue Nov 19 11:05:14 2024 00:11:06.285 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:11:06.285 slat (nsec): min=944, max=6033.3k, avg=76993.81, stdev=436650.44 00:11:06.285 clat (usec): min=4262, max=36722, avg=9972.32, stdev=3637.37 00:11:06.285 lat (usec): min=4269, max=37554, avg=10049.31, stdev=3657.48 00:11:06.285 clat percentiles (usec): 00:11:06.285 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 
6915], 20.00th=[ 7504], 00:11:06.285 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:11:06.285 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[14746], 95.00th=[18744], 00:11:06.285 | 99.00th=[21890], 99.50th=[23725], 99.90th=[36439], 99.95th=[36963], 00:11:06.285 | 99.99th=[36963] 00:11:06.285 write: IOPS=6973, BW=27.2MiB/s (28.6MB/s)(27.3MiB/1002msec); 0 zone resets 00:11:06.285 slat (nsec): min=1618, max=6266.3k, avg=64225.11, stdev=368855.53 00:11:06.285 clat (usec): min=1346, max=18659, avg=8650.14, stdev=2368.09 00:11:06.285 lat (usec): min=1357, max=18685, avg=8714.36, stdev=2382.99 00:11:06.285 clat percentiles (usec): 00:11:06.285 | 1.00th=[ 3359], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6587], 00:11:06.285 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8717], 00:11:06.285 | 70.00th=[ 9503], 80.00th=[10683], 90.00th=[11731], 95.00th=[13435], 00:11:06.285 | 99.00th=[14746], 99.50th=[15401], 99.90th=[16319], 99.95th=[16909], 00:11:06.285 | 99.99th=[18744] 00:11:06.285 bw ( KiB/s): min=26208, max=28672, per=31.37%, avg=27440.00, stdev=1742.31, samples=2 00:11:06.285 iops : min= 6552, max= 7168, avg=6860.00, stdev=435.58, samples=2 00:11:06.285 lat (msec) : 2=0.12%, 4=0.62%, 10=71.63%, 20=26.15%, 50=1.48% 00:11:06.285 cpu : usr=5.29%, sys=6.69%, ctx=557, majf=0, minf=1 00:11:06.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:06.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.285 issued rwts: total=6656,6987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.285 job1: (groupid=0, jobs=1): err= 0: pid=3988013: Tue Nov 19 11:05:14 2024 00:11:06.285 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:11:06.285 slat (nsec): min=948, max=33660k, avg=121759.54, stdev=1023360.89 00:11:06.286 clat 
(usec): min=3308, max=93144, avg=15685.37, stdev=15418.50 00:11:06.286 lat (usec): min=3321, max=93150, avg=15807.13, stdev=15513.73 00:11:06.286 clat percentiles (usec): 00:11:06.286 | 1.00th=[ 4293], 5.00th=[ 5866], 10.00th=[ 7373], 20.00th=[ 8225], 00:11:06.286 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[11600], 00:11:06.286 | 70.00th=[13698], 80.00th=[16188], 90.00th=[31327], 95.00th=[51119], 00:11:06.286 | 99.00th=[86508], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:11:06.286 | 99.99th=[92799] 00:11:06.286 write: IOPS=4607, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1005msec); 0 zone resets 00:11:06.286 slat (nsec): min=1627, max=11225k, avg=88300.56, stdev=521174.31 00:11:06.286 clat (usec): min=3812, max=62795, avg=11812.84, stdev=7349.08 00:11:06.286 lat (usec): min=3831, max=62805, avg=11901.14, stdev=7369.35 00:11:06.286 clat percentiles (usec): 00:11:06.286 | 1.00th=[ 4817], 5.00th=[ 6390], 10.00th=[ 6849], 20.00th=[ 7373], 00:11:06.286 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[11207], 00:11:06.286 | 70.00th=[12125], 80.00th=[13042], 90.00th=[17957], 95.00th=[23200], 00:11:06.286 | 99.00th=[51643], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:11:06.286 | 99.99th=[62653] 00:11:06.286 bw ( KiB/s): min=11280, max=25584, per=21.07%, avg=18432.00, stdev=10114.46, samples=2 00:11:06.286 iops : min= 2820, max= 6396, avg=4608.00, stdev=2528.61, samples=2 00:11:06.286 lat (msec) : 4=0.56%, 10=50.64%, 20=36.17%, 50=8.96%, 100=3.66% 00:11:06.286 cpu : usr=3.39%, sys=5.28%, ctx=448, majf=0, minf=1 00:11:06.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.286 issued rwts: total=4608,4631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.286 job2: 
(groupid=0, jobs=1): err= 0: pid=3988023: Tue Nov 19 11:05:14 2024 00:11:06.286 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:11:06.286 slat (nsec): min=991, max=20770k, avg=105802.16, stdev=723302.01 00:11:06.286 clat (usec): min=5850, max=43367, avg=13305.03, stdev=4730.13 00:11:06.286 lat (usec): min=5856, max=43393, avg=13410.83, stdev=4780.08 00:11:06.286 clat percentiles (usec): 00:11:06.286 | 1.00th=[ 6980], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9765], 00:11:06.286 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12256], 60.00th=[12911], 00:11:06.286 | 70.00th=[13698], 80.00th=[15533], 90.00th=[18744], 95.00th=[22676], 00:11:06.286 | 99.00th=[31065], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:11:06.286 | 99.99th=[43254] 00:11:06.286 write: IOPS=4897, BW=19.1MiB/s (20.1MB/s)(19.3MiB/1009msec); 0 zone resets 00:11:06.286 slat (nsec): min=1674, max=8457.5k, avg=96943.00, stdev=538246.47 00:11:06.286 clat (usec): min=1388, max=65222, avg=13440.91, stdev=10136.13 00:11:06.286 lat (usec): min=1399, max=65232, avg=13537.85, stdev=10198.74 00:11:06.286 clat percentiles (usec): 00:11:06.286 | 1.00th=[ 2769], 5.00th=[ 5276], 10.00th=[ 7308], 20.00th=[ 7898], 00:11:06.286 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10945], 00:11:06.286 | 70.00th=[12125], 80.00th=[15664], 90.00th=[23987], 95.00th=[38536], 00:11:06.286 | 99.00th=[58459], 99.50th=[59507], 99.90th=[65274], 99.95th=[65274], 00:11:06.286 | 99.99th=[65274] 00:11:06.286 bw ( KiB/s): min=19064, max=19456, per=22.02%, avg=19260.00, stdev=277.19, samples=2 00:11:06.286 iops : min= 4766, max= 4864, avg=4815.00, stdev=69.30, samples=2 00:11:06.286 lat (msec) : 2=0.15%, 4=0.91%, 10=35.99%, 20=51.92%, 50=9.91% 00:11:06.286 lat (msec) : 100=1.13% 00:11:06.286 cpu : usr=3.37%, sys=6.65%, ctx=457, majf=0, minf=1 00:11:06.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:06.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.286 issued rwts: total=4608,4942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.286 job3: (groupid=0, jobs=1): err= 0: pid=3988024: Tue Nov 19 11:05:14 2024 00:11:06.286 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:11:06.286 slat (nsec): min=993, max=9132.7k, avg=76734.65, stdev=523894.68 00:11:06.286 clat (usec): min=1224, max=34212, avg=10572.08, stdev=5011.04 00:11:06.286 lat (usec): min=1231, max=34220, avg=10648.82, stdev=5039.77 00:11:06.286 clat percentiles (usec): 00:11:06.286 | 1.00th=[ 1893], 5.00th=[ 3195], 10.00th=[ 6128], 20.00th=[ 7046], 00:11:06.286 | 30.00th=[ 8225], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10552], 00:11:06.286 | 70.00th=[11338], 80.00th=[12518], 90.00th=[16450], 95.00th=[21890], 00:11:06.286 | 99.00th=[27395], 99.50th=[29492], 99.90th=[32375], 99.95th=[34341], 00:11:06.286 | 99.99th=[34341] 00:11:06.286 write: IOPS=5481, BW=21.4MiB/s (22.5MB/s)(21.5MiB/1004msec); 0 zone resets 00:11:06.286 slat (nsec): min=1636, max=11676k, avg=92532.33, stdev=609340.44 00:11:06.286 clat (usec): min=1269, max=85372, avg=13280.21, stdev=14560.54 00:11:06.286 lat (usec): min=1281, max=85380, avg=13372.74, stdev=14636.86 00:11:06.286 clat percentiles (usec): 00:11:06.286 | 1.00th=[ 2671], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 6456], 00:11:06.286 | 30.00th=[ 7570], 40.00th=[ 8356], 50.00th=[ 9503], 60.00th=[10290], 00:11:06.286 | 70.00th=[11731], 80.00th=[13435], 90.00th=[22676], 95.00th=[50070], 00:11:06.286 | 99.00th=[80217], 99.50th=[82314], 99.90th=[84411], 99.95th=[85459], 00:11:06.286 | 99.99th=[85459] 00:11:06.286 bw ( KiB/s): min=16384, max=26624, per=24.59%, avg=21504.00, stdev=7240.77, samples=2 00:11:06.286 iops : min= 4096, max= 6656, avg=5376.00, stdev=1810.19, samples=2 00:11:06.286 lat (msec) : 2=0.84%, 4=4.05%, 10=50.78%, 20=35.45%, 50=6.28% 
00:11:06.286 lat (msec) : 100=2.61% 00:11:06.286 cpu : usr=5.08%, sys=5.58%, ctx=439, majf=0, minf=2 00:11:06.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:06.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.286 issued rwts: total=5120,5503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.286 00:11:06.286 Run status group 0 (all jobs): 00:11:06.286 READ: bw=81.3MiB/s (85.2MB/s), 17.8MiB/s-25.9MiB/s (18.7MB/s-27.2MB/s), io=82.0MiB (86.0MB), run=1002-1009msec 00:11:06.286 WRITE: bw=85.4MiB/s (89.6MB/s), 18.0MiB/s-27.2MiB/s (18.9MB/s-28.6MB/s), io=86.2MiB (90.4MB), run=1002-1009msec 00:11:06.286 00:11:06.286 Disk stats (read/write): 00:11:06.286 nvme0n1: ios=5355/5632, merge=0/0, ticks=18127/16929, in_queue=35056, util=86.77% 00:11:06.286 nvme0n2: ios=4309/4608, merge=0/0, ticks=16034/14615, in_queue=30649, util=90.11% 00:11:06.286 nvme0n3: ios=4153/4608, merge=0/0, ticks=29206/30927, in_queue=60133, util=91.76% 00:11:06.286 nvme0n4: ios=4124/4096, merge=0/0, ticks=26859/50115, in_queue=76974, util=93.27% 00:11:06.286 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:06.286 [global] 00:11:06.286 thread=1 00:11:06.286 invalidate=1 00:11:06.286 rw=randwrite 00:11:06.286 time_based=1 00:11:06.286 runtime=1 00:11:06.286 ioengine=libaio 00:11:06.286 direct=1 00:11:06.286 bs=4096 00:11:06.286 iodepth=128 00:11:06.286 norandommap=0 00:11:06.286 numjobs=1 00:11:06.286 00:11:06.286 verify_dump=1 00:11:06.286 verify_backlog=512 00:11:06.286 verify_state_save=0 00:11:06.286 do_verify=1 00:11:06.286 verify=crc32c-intel 00:11:06.286 [job0] 00:11:06.286 filename=/dev/nvme0n1 00:11:06.286 [job1] 00:11:06.286 
filename=/dev/nvme0n2 00:11:06.286 [job2] 00:11:06.286 filename=/dev/nvme0n3 00:11:06.286 [job3] 00:11:06.286 filename=/dev/nvme0n4 00:11:06.286 Could not set queue depth (nvme0n1) 00:11:06.286 Could not set queue depth (nvme0n2) 00:11:06.286 Could not set queue depth (nvme0n3) 00:11:06.286 Could not set queue depth (nvme0n4) 00:11:06.547 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.547 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.547 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.547 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.547 fio-3.35 00:11:06.547 Starting 4 threads 00:11:07.931 00:11:07.931 job0: (groupid=0, jobs=1): err= 0: pid=3988486: Tue Nov 19 11:05:16 2024 00:11:07.931 read: IOPS=7151, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:11:07.931 slat (nsec): min=895, max=20616k, avg=55744.27, stdev=601973.22 00:11:07.931 clat (usec): min=1348, max=43464, avg=7801.63, stdev=5492.86 00:11:07.931 lat (usec): min=1457, max=43490, avg=7857.37, stdev=5549.47 00:11:07.931 clat percentiles (usec): 00:11:07.931 | 1.00th=[ 2245], 5.00th=[ 3359], 10.00th=[ 3982], 20.00th=[ 5014], 00:11:07.931 | 30.00th=[ 5538], 40.00th=[ 5997], 50.00th=[ 6259], 60.00th=[ 6652], 00:11:07.931 | 70.00th=[ 7308], 80.00th=[ 8848], 90.00th=[12649], 95.00th=[22152], 00:11:07.931 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:11:07.931 | 99.99th=[43254] 00:11:07.931 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:11:07.931 slat (nsec): min=1478, max=32356k, avg=58249.07, stdev=578061.44 00:11:07.931 clat (usec): min=364, max=53010, avg=9298.53, stdev=9307.64 00:11:07.931 lat (usec): min=388, max=53019, avg=9356.77, stdev=9357.95 00:11:07.931 clat 
percentiles (usec): 00:11:07.931 | 1.00th=[ 1369], 5.00th=[ 2933], 10.00th=[ 3458], 20.00th=[ 4359], 00:11:07.931 | 30.00th=[ 5211], 40.00th=[ 5407], 50.00th=[ 5800], 60.00th=[ 6390], 00:11:07.931 | 70.00th=[ 7046], 80.00th=[10421], 90.00th=[24511], 95.00th=[34341], 00:11:07.931 | 99.00th=[45351], 99.50th=[46924], 99.90th=[51643], 99.95th=[53216], 00:11:07.931 | 99.99th=[53216] 00:11:07.931 bw ( KiB/s): min=17456, max=43000, per=37.96%, avg=30228.00, stdev=18062.34, samples=2 00:11:07.931 iops : min= 4364, max=10750, avg=7557.00, stdev=4515.58, samples=2 00:11:07.931 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.12% 00:11:07.931 lat (msec) : 2=1.10%, 4=11.44%, 10=69.22%, 20=8.34%, 50=9.60% 00:11:07.931 lat (msec) : 100=0.09% 00:11:07.931 cpu : usr=5.39%, sys=9.08%, ctx=432, majf=0, minf=1 00:11:07.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:07.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.931 issued rwts: total=7173,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.931 job1: (groupid=0, jobs=1): err= 0: pid=3988502: Tue Nov 19 11:05:16 2024 00:11:07.931 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:11:07.931 slat (nsec): min=929, max=21915k, avg=121264.48, stdev=992810.78 00:11:07.931 clat (usec): min=1855, max=91611, avg=15960.40, stdev=16218.43 00:11:07.931 lat (usec): min=1865, max=91635, avg=16081.66, stdev=16352.58 00:11:07.931 clat percentiles (usec): 00:11:07.931 | 1.00th=[ 3490], 5.00th=[ 3752], 10.00th=[ 4490], 20.00th=[ 5669], 00:11:07.931 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 9110], 60.00th=[13698], 00:11:07.931 | 70.00th=[15401], 80.00th=[19530], 90.00th=[40633], 95.00th=[59507], 00:11:07.931 | 99.00th=[80217], 99.50th=[80217], 99.90th=[84411], 99.95th=[84411], 00:11:07.931 | 99.99th=[91751] 
00:11:07.931 write: IOPS=4664, BW=18.2MiB/s (19.1MB/s)(18.4MiB/1010msec); 0 zone resets 00:11:07.931 slat (nsec): min=1558, max=20477k, avg=83472.93, stdev=682001.71 00:11:07.931 clat (usec): min=600, max=111261, avg=13349.54, stdev=16565.85 00:11:07.931 lat (usec): min=974, max=111269, avg=13433.02, stdev=16665.69 00:11:07.931 clat percentiles (msec): 00:11:07.931 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 5], 00:11:07.931 | 30.00th=[ 5], 40.00th=[ 6], 50.00th=[ 8], 60.00th=[ 11], 00:11:07.931 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 26], 95.00th=[ 55], 00:11:07.931 | 99.00th=[ 89], 99.50th=[ 101], 99.90th=[ 112], 99.95th=[ 112], 00:11:07.931 | 99.99th=[ 112] 00:11:07.931 bw ( KiB/s): min=10600, max=26264, per=23.15%, avg=18432.00, stdev=11076.12, samples=2 00:11:07.931 iops : min= 2650, max= 6566, avg=4608.00, stdev=2769.03, samples=2 00:11:07.931 lat (usec) : 750=0.01%, 1000=0.02% 00:11:07.931 lat (msec) : 2=0.12%, 4=7.40%, 10=46.61%, 20=29.64%, 50=9.20% 00:11:07.931 lat (msec) : 100=6.69%, 250=0.31% 00:11:07.931 cpu : usr=3.17%, sys=5.75%, ctx=292, majf=0, minf=3 00:11:07.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:07.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.931 issued rwts: total=4096,4711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.931 job2: (groupid=0, jobs=1): err= 0: pid=3988521: Tue Nov 19 11:05:16 2024 00:11:07.931 read: IOPS=3090, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1014msec) 00:11:07.931 slat (nsec): min=1074, max=13960k, avg=130385.83, stdev=959466.10 00:11:07.931 clat (usec): min=4502, max=48954, avg=16180.68, stdev=6191.54 00:11:07.931 lat (usec): min=4509, max=48961, avg=16311.07, stdev=6271.48 00:11:07.931 clat percentiles (usec): 00:11:07.931 | 1.00th=[ 6587], 5.00th=[ 8455], 10.00th=[ 9372], 
20.00th=[11863], 00:11:07.931 | 30.00th=[12911], 40.00th=[14222], 50.00th=[15401], 60.00th=[16057], 00:11:07.931 | 70.00th=[17171], 80.00th=[21627], 90.00th=[23462], 95.00th=[24511], 00:11:07.931 | 99.00th=[39584], 99.50th=[44303], 99.90th=[49021], 99.95th=[49021], 00:11:07.931 | 99.99th=[49021] 00:11:07.931 write: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec); 0 zone resets 00:11:07.931 slat (nsec): min=1556, max=19434k, avg=156790.15, stdev=869140.65 00:11:07.931 clat (usec): min=1254, max=54490, avg=21745.76, stdev=13729.67 00:11:07.931 lat (usec): min=1265, max=54498, avg=21902.55, stdev=13824.86 00:11:07.931 clat percentiles (usec): 00:11:07.931 | 1.00th=[ 5866], 5.00th=[ 8291], 10.00th=[ 8455], 20.00th=[ 8848], 00:11:07.931 | 30.00th=[ 9896], 40.00th=[12125], 50.00th=[15139], 60.00th=[23462], 00:11:07.931 | 70.00th=[29230], 80.00th=[37487], 90.00th=[41681], 95.00th=[46924], 00:11:07.931 | 99.00th=[52167], 99.50th=[53740], 99.90th=[53740], 99.95th=[54264], 00:11:07.931 | 99.99th=[54264] 00:11:07.931 bw ( KiB/s): min=12208, max=15936, per=17.67%, avg=14072.00, stdev=2636.09, samples=2 00:11:07.931 iops : min= 3052, max= 3984, avg=3518.00, stdev=659.02, samples=2 00:11:07.931 lat (msec) : 2=0.03%, 4=0.09%, 10=22.60%, 20=42.48%, 50=33.06% 00:11:07.931 lat (msec) : 100=1.74% 00:11:07.931 cpu : usr=2.67%, sys=3.95%, ctx=276, majf=0, minf=1 00:11:07.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:07.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.931 issued rwts: total=3134,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.931 job3: (groupid=0, jobs=1): err= 0: pid=3988529: Tue Nov 19 11:05:16 2024 00:11:07.931 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:11:07.931 slat (usec): min=2, max=30020, avg=124.79, 
stdev=1134.19 00:11:07.931 clat (usec): min=2453, max=92286, avg=16392.42, stdev=15420.64 00:11:07.931 lat (usec): min=2460, max=92311, avg=16517.20, stdev=15553.93 00:11:07.931 clat percentiles (usec): 00:11:07.931 | 1.00th=[ 3785], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6652], 00:11:07.931 | 30.00th=[ 7111], 40.00th=[ 7898], 50.00th=[ 8979], 60.00th=[12518], 00:11:07.931 | 70.00th=[17171], 80.00th=[24249], 90.00th=[36963], 95.00th=[51119], 00:11:07.932 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[85459], 00:11:07.932 | 99.99th=[92799] 00:11:07.932 write: IOPS=4172, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1009msec); 0 zone resets 00:11:07.932 slat (nsec): min=1541, max=23354k, avg=97454.79, stdev=897948.96 00:11:07.932 clat (usec): min=728, max=76845, avg=14455.35, stdev=14321.64 00:11:07.932 lat (usec): min=737, max=76874, avg=14552.81, stdev=14429.01 00:11:07.932 clat percentiles (usec): 00:11:07.932 | 1.00th=[ 1827], 5.00th=[ 3425], 10.00th=[ 4080], 20.00th=[ 5080], 00:11:07.932 | 30.00th=[ 6063], 40.00th=[ 6652], 50.00th=[ 8717], 60.00th=[10421], 00:11:07.932 | 70.00th=[15008], 80.00th=[22676], 90.00th=[39060], 95.00th=[49546], 00:11:07.932 | 99.00th=[61080], 99.50th=[61080], 99.90th=[67634], 99.95th=[71828], 00:11:07.932 | 99.99th=[77071] 00:11:07.932 bw ( KiB/s): min= 5488, max=27280, per=20.58%, avg=16384.00, stdev=15409.27, samples=2 00:11:07.932 iops : min= 1372, max= 6820, avg=4096.00, stdev=3852.32, samples=2 00:11:07.932 lat (usec) : 750=0.02%, 1000=0.18% 00:11:07.932 lat (msec) : 2=0.40%, 4=3.91%, 10=51.22%, 20=21.72%, 50=17.81% 00:11:07.932 lat (msec) : 100=4.74% 00:11:07.932 cpu : usr=3.57%, sys=4.46%, ctx=274, majf=0, minf=1 00:11:07.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:07.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.932 issued rwts: total=4096,4210,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:07.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.932 00:11:07.932 Run status group 0 (all jobs): 00:11:07.932 READ: bw=71.3MiB/s (74.7MB/s), 12.1MiB/s-27.9MiB/s (12.7MB/s-29.3MB/s), io=72.3MiB (75.8MB), run=1003-1014msec 00:11:07.932 WRITE: bw=77.8MiB/s (81.5MB/s), 13.8MiB/s-29.9MiB/s (14.5MB/s-31.4MB/s), io=78.8MiB (82.7MB), run=1003-1014msec 00:11:07.932 00:11:07.932 Disk stats (read/write): 00:11:07.932 nvme0n1: ios=5294/5632, merge=0/0, ticks=41050/54309, in_queue=95359, util=88.58% 00:11:07.932 nvme0n2: ios=3419/4096, merge=0/0, ticks=29693/33677, in_queue=63370, util=86.72% 00:11:07.932 nvme0n3: ios=2920/3072, merge=0/0, ticks=44271/54726, in_queue=98997, util=90.90% 00:11:07.932 nvme0n4: ios=3584/4096, merge=0/0, ticks=33744/31381, in_queue=65125, util=88.75% 00:11:07.932 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:07.932 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3988585 00:11:07.932 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:07.932 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:07.932 [global] 00:11:07.932 thread=1 00:11:07.932 invalidate=1 00:11:07.932 rw=read 00:11:07.932 time_based=1 00:11:07.932 runtime=10 00:11:07.932 ioengine=libaio 00:11:07.932 direct=1 00:11:07.932 bs=4096 00:11:07.932 iodepth=1 00:11:07.932 norandommap=1 00:11:07.932 numjobs=1 00:11:07.932 00:11:07.932 [job0] 00:11:07.932 filename=/dev/nvme0n1 00:11:07.932 [job1] 00:11:07.932 filename=/dev/nvme0n2 00:11:07.932 [job2] 00:11:07.932 filename=/dev/nvme0n3 00:11:07.932 [job3] 00:11:07.932 filename=/dev/nvme0n4 00:11:07.932 Could not set queue depth (nvme0n1) 00:11:07.932 Could not set queue depth (nvme0n2) 00:11:07.932 Could not set queue depth (nvme0n3) 
00:11:07.932 Could not set queue depth (nvme0n4) 00:11:08.193 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.193 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.193 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.193 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.193 fio-3.35 00:11:08.193 Starting 4 threads 00:11:10.742 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:11.005 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11464704, buflen=4096 00:11:11.005 fio: pid=3989013, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:11.005 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:11.269 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.269 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=5451776, buflen=4096 00:11:11.269 fio: pid=3989006, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:11.269 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:11.269 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=13316096, buflen=4096 00:11:11.269 fio: pid=3988977, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:11.269 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.269 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:11.530 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.530 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:11.530 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=10702848, buflen=4096 00:11:11.530 fio: pid=3988987, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:11.530 00:11:11.530 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3988977: Tue Nov 19 11:05:19 2024 00:11:11.530 read: IOPS=1097, BW=4387KiB/s (4493kB/s)(12.7MiB/2964msec) 00:11:11.530 slat (usec): min=6, max=29183, avg=35.41, stdev=532.12 00:11:11.530 clat (usec): min=363, max=45507, avg=863.16, stdev=1061.47 00:11:11.530 lat (usec): min=389, max=45534, avg=898.58, stdev=1186.50 00:11:11.530 clat percentiles (usec): 00:11:11.530 | 1.00th=[ 644], 5.00th=[ 709], 10.00th=[ 742], 20.00th=[ 791], 00:11:11.530 | 30.00th=[ 807], 40.00th=[ 832], 50.00th=[ 840], 60.00th=[ 857], 00:11:11.530 | 70.00th=[ 873], 80.00th=[ 889], 90.00th=[ 914], 95.00th=[ 938], 00:11:11.530 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1074], 99.95th=[41681], 00:11:11.530 | 99.99th=[45351] 00:11:11.530 bw ( KiB/s): min= 4512, max= 4728, per=36.37%, avg=4595.20, stdev=90.44, samples=5 00:11:11.530 iops : min= 1128, max= 1182, avg=1148.80, stdev=22.61, samples=5 00:11:11.530 lat (usec) : 500=0.18%, 750=10.55%, 1000=88.56% 00:11:11.530 lat (msec) : 2=0.62%, 50=0.06% 00:11:11.530 cpu : usr=1.15%, sys=3.00%, ctx=3255, majf=0, minf=1 00:11:11.530 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.530 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.530 issued rwts: total=3252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.530 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3988987: Tue Nov 19 11:05:19 2024 00:11:11.530 read: IOPS=826, BW=3303KiB/s (3383kB/s)(10.2MiB/3164msec) 00:11:11.530 slat (usec): min=6, max=27583, avg=53.87, stdev=712.55 00:11:11.530 clat (usec): min=499, max=42003, avg=1150.22, stdev=1387.13 00:11:11.530 lat (usec): min=507, max=42031, avg=1201.31, stdev=1552.54 00:11:11.530 clat percentiles (usec): 00:11:11.530 | 1.00th=[ 676], 5.00th=[ 848], 10.00th=[ 955], 20.00th=[ 1037], 00:11:11.530 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:11:11.530 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:11:11.530 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[41681], 99.95th=[41681], 00:11:11.530 | 99.99th=[42206] 00:11:11.530 bw ( KiB/s): min= 2984, max= 3536, per=26.89%, avg=3398.67, stdev=207.96, samples=6 00:11:11.530 iops : min= 746, max= 884, avg=849.67, stdev=51.99, samples=6 00:11:11.530 lat (usec) : 500=0.04%, 750=1.84%, 1000=11.44% 00:11:11.531 lat (msec) : 2=86.42%, 4=0.08%, 10=0.04%, 50=0.11% 00:11:11.531 cpu : usr=0.95%, sys=2.66%, ctx=2619, majf=0, minf=2 00:11:11.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.531 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.531 issued rwts: total=2614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.531 job2: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3989006: Tue Nov 19 11:05:19 2024 00:11:11.531 read: IOPS=479, BW=1915KiB/s (1961kB/s)(5324KiB/2780msec) 00:11:11.531 slat (nsec): min=6871, max=63236, avg=27405.09, stdev=4455.02 00:11:11.531 clat (usec): min=571, max=42091, avg=2037.41, stdev=6253.64 00:11:11.531 lat (usec): min=610, max=42118, avg=2064.83, stdev=6253.51 00:11:11.531 clat percentiles (usec): 00:11:11.531 | 1.00th=[ 840], 5.00th=[ 930], 10.00th=[ 963], 20.00th=[ 996], 00:11:11.531 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:11:11.531 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1205], 00:11:11.531 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:11.531 | 99.99th=[42206] 00:11:11.531 bw ( KiB/s): min= 96, max= 3688, per=16.24%, avg=2052.80, stdev=1825.44, samples=5 00:11:11.531 iops : min= 24, max= 922, avg=513.20, stdev=456.36, samples=5 00:11:11.531 lat (usec) : 750=0.15%, 1000=20.05% 00:11:11.531 lat (msec) : 2=77.33%, 50=2.40% 00:11:11.531 cpu : usr=0.90%, sys=1.91%, ctx=1332, majf=0, minf=2 00:11:11.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.531 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.531 issued rwts: total=1332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.531 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3989013: Tue Nov 19 11:05:19 2024 00:11:11.531 read: IOPS=1086, BW=4345KiB/s (4449kB/s)(10.9MiB/2577msec) 00:11:11.531 slat (nsec): min=6884, max=55914, avg=25578.71, stdev=4191.48 00:11:11.531 clat (usec): min=234, max=1420, avg=885.47, stdev=114.40 00:11:11.531 lat (usec): min=260, max=1448, avg=911.04, stdev=114.79 
00:11:11.531 clat percentiles (usec): 00:11:11.531 | 1.00th=[ 570], 5.00th=[ 676], 10.00th=[ 734], 20.00th=[ 791], 00:11:11.531 | 30.00th=[ 832], 40.00th=[ 873], 50.00th=[ 906], 60.00th=[ 930], 00:11:11.531 | 70.00th=[ 955], 80.00th=[ 979], 90.00th=[ 1012], 95.00th=[ 1045], 00:11:11.531 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1221], 99.95th=[ 1401], 00:11:11.531 | 99.99th=[ 1418] 00:11:11.531 bw ( KiB/s): min= 4296, max= 4416, per=34.53%, avg=4363.20, stdev=55.60, samples=5 00:11:11.531 iops : min= 1074, max= 1104, avg=1090.80, stdev=13.90, samples=5 00:11:11.531 lat (usec) : 250=0.04%, 500=0.21%, 750=12.21%, 1000=74.32% 00:11:11.531 lat (msec) : 2=13.18% 00:11:11.531 cpu : usr=1.20%, sys=3.30%, ctx=2800, majf=0, minf=2 00:11:11.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.531 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.531 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.531 00:11:11.531 Run status group 0 (all jobs): 00:11:11.531 READ: bw=12.3MiB/s (12.9MB/s), 1915KiB/s-4387KiB/s (1961kB/s-4493kB/s), io=39.0MiB (40.9MB), run=2577-3164msec 00:11:11.531 00:11:11.531 Disk stats (read/write): 00:11:11.531 nvme0n1: ios=3229/0, merge=0/0, ticks=2637/0, in_queue=2637, util=93.49% 00:11:11.531 nvme0n2: ios=2643/0, merge=0/0, ticks=2925/0, in_queue=2925, util=94.39% 00:11:11.531 nvme0n3: ios=1326/0, merge=0/0, ticks=2390/0, in_queue=2390, util=95.99% 00:11:11.531 nvme0n4: ios=2550/0, merge=0/0, ticks=2172/0, in_queue=2172, util=96.06% 00:11:11.791 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.791 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:12.052 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.052 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:12.052 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.052 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:12.313 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.313 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3988585 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:12.574 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:12.575 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:12.575 nvmf hotplug test: fio failed as expected 00:11:12.575 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.836 rmmod nvme_tcp 00:11:12.836 rmmod nvme_fabrics 00:11:12.836 rmmod nvme_keyring 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3985049 ']' 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3985049 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3985049 ']' 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3985049 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3985049 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3985049' 00:11:12.836 killing process with pid 3985049 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@973 -- # kill 3985049 00:11:12.836 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3985049 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.098 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.644 00:11:15.644 real 0m30.199s 00:11:15.644 user 2m29.529s 00:11:15.644 sys 0m10.485s 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.644 
************************************ 00:11:15.644 END TEST nvmf_fio_target 00:11:15.644 ************************************ 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:15.644 ************************************ 00:11:15.644 START TEST nvmf_bdevio 00:11:15.644 ************************************ 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:15.644 * Looking for test storage... 00:11:15.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:15.644 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.645 11:05:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.645 --rc genhtml_branch_coverage=1 00:11:15.645 --rc genhtml_function_coverage=1 00:11:15.645 --rc genhtml_legend=1 00:11:15.645 --rc geninfo_all_blocks=1 00:11:15.645 --rc geninfo_unexecuted_blocks=1 00:11:15.645 00:11:15.645 ' 00:11:15.645 11:05:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.645 --rc genhtml_branch_coverage=1 00:11:15.645 --rc genhtml_function_coverage=1 00:11:15.645 --rc genhtml_legend=1 00:11:15.645 --rc geninfo_all_blocks=1 00:11:15.645 --rc geninfo_unexecuted_blocks=1 00:11:15.645 00:11:15.645 ' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.645 --rc genhtml_branch_coverage=1 00:11:15.645 --rc genhtml_function_coverage=1 00:11:15.645 --rc genhtml_legend=1 00:11:15.645 --rc geninfo_all_blocks=1 00:11:15.645 --rc geninfo_unexecuted_blocks=1 00:11:15.645 00:11:15.645 ' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.645 --rc genhtml_branch_coverage=1 00:11:15.645 --rc genhtml_function_coverage=1 00:11:15.645 --rc genhtml_legend=1 00:11:15.645 --rc geninfo_all_blocks=1 00:11:15.645 --rc geninfo_unexecuted_blocks=1 00:11:15.645 00:11:15.645 ' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.645 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.646 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.806 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.807 11:05:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.807 11:05:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:23.807 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:23.807 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.807 
11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:23.807 Found net devices under 0000:31:00.0: cvl_0_0 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:23.807 Found net devices under 0000:31:00.1: cvl_0_1 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.807 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.808 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:23.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:11:23.808 00:11:23.808 --- 10.0.0.2 ping statistics --- 00:11:23.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.808 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:11:23.808 00:11:23.808 --- 10.0.0.1 ping statistics --- 00:11:23.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.808 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.808 11:05:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3994719 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3994719 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3994719 ']' 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.808 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.808 [2024-11-19 11:05:32.129043] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:11:23.808 [2024-11-19 11:05:32.129111] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.105 [2024-11-19 11:05:32.236929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.105 [2024-11-19 11:05:32.287034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.105 [2024-11-19 11:05:32.287088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.105 [2024-11-19 11:05:32.287097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.105 [2024-11-19 11:05:32.287104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.105 [2024-11-19 11:05:32.287110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:24.105 [2024-11-19 11:05:32.289139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:24.105 [2024-11-19 11:05:32.289305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:24.105 [2024-11-19 11:05:32.289464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.105 [2024-11-19 11:05:32.289465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:24.729 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.729 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:24.729 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.729 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.729 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.729 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.729 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.729 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.729 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.729 [2024-11-19 11:05:33.015707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.729 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.729 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.729 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.729 11:05:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.729 Malloc0 00:11:24.729 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.730 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.730 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.730 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.730 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.730 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.730 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.730 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.990 [2024-11-19 11:05:33.089204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.990 { 00:11:24.990 "params": { 00:11:24.990 "name": "Nvme$subsystem", 00:11:24.990 "trtype": "$TEST_TRANSPORT", 00:11:24.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.990 "adrfam": "ipv4", 00:11:24.990 "trsvcid": "$NVMF_PORT", 00:11:24.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.990 "hdgst": ${hdgst:-false}, 00:11:24.990 "ddgst": ${ddgst:-false} 00:11:24.990 }, 00:11:24.990 "method": "bdev_nvme_attach_controller" 00:11:24.990 } 00:11:24.990 EOF 00:11:24.990 )") 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:24.990 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.990 "params": { 00:11:24.990 "name": "Nvme1", 00:11:24.990 "trtype": "tcp", 00:11:24.991 "traddr": "10.0.0.2", 00:11:24.991 "adrfam": "ipv4", 00:11:24.991 "trsvcid": "4420", 00:11:24.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.991 "hdgst": false, 00:11:24.991 "ddgst": false 00:11:24.991 }, 00:11:24.991 "method": "bdev_nvme_attach_controller" 00:11:24.991 }' 00:11:24.991 [2024-11-19 11:05:33.148765] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:11:24.991 [2024-11-19 11:05:33.148833] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3994830 ] 00:11:24.991 [2024-11-19 11:05:33.234886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:24.991 [2024-11-19 11:05:33.279290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.991 [2024-11-19 11:05:33.279409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.991 [2024-11-19 11:05:33.279412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.252 I/O targets: 00:11:25.252 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:25.252 00:11:25.252 00:11:25.252 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.252 http://cunit.sourceforge.net/ 00:11:25.252 00:11:25.252 00:11:25.252 Suite: bdevio tests on: Nvme1n1 00:11:25.252 Test: blockdev write read block ...passed 00:11:25.513 Test: blockdev write zeroes read block ...passed 00:11:25.513 Test: blockdev write zeroes read no split ...passed 00:11:25.513 Test: blockdev write zeroes read split 
...passed 00:11:25.513 Test: blockdev write zeroes read split partial ...passed 00:11:25.513 Test: blockdev reset ...[2024-11-19 11:05:33.721380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:25.513 [2024-11-19 11:05:33.721446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104a4b0 (9): Bad file descriptor 00:11:25.513 [2024-11-19 11:05:33.779466] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:25.513 passed 00:11:25.513 Test: blockdev write read 8 blocks ...passed 00:11:25.513 Test: blockdev write read size > 128k ...passed 00:11:25.513 Test: blockdev write read invalid size ...passed 00:11:25.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.514 Test: blockdev write read max offset ...passed 00:11:25.774 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.774 Test: blockdev writev readv 8 blocks ...passed 00:11:25.774 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.774 Test: blockdev writev readv block ...passed 00:11:25.774 Test: blockdev writev readv size > 128k ...passed 00:11:25.774 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.774 Test: blockdev comparev and writev ...[2024-11-19 11:05:34.003343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.774 [2024-11-19 11:05:34.003368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.003379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.774 [2024-11-19 
11:05:34.003385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.003886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.774 [2024-11-19 11:05:34.003895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.003905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.774 [2024-11-19 11:05:34.003910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.004388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.774 [2024-11-19 11:05:34.004396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.004406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.774 [2024-11-19 11:05:34.004411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.004873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.774 [2024-11-19 11:05:34.004882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.004891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.774 [2024-11-19 11:05:34.004897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:25.774 passed 00:11:25.774 Test: blockdev nvme passthru rw ...passed 00:11:25.774 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:05:34.088748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.774 [2024-11-19 11:05:34.088761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.089087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.774 [2024-11-19 11:05:34.089094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:25.774 [2024-11-19 11:05:34.089448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.774 [2024-11-19 11:05:34.089456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:25.775 [2024-11-19 11:05:34.089772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.775 [2024-11-19 11:05:34.089779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:25.775 passed 00:11:25.775 Test: blockdev nvme admin passthru ...passed 00:11:26.035 Test: blockdev copy ...passed 00:11:26.035 00:11:26.035 Run Summary: Type Total Ran Passed Failed Inactive 00:11:26.035 suites 1 1 n/a 0 0 00:11:26.035 tests 23 23 23 0 0 00:11:26.035 asserts 152 152 152 0 n/a 00:11:26.035 00:11:26.036 Elapsed time = 1.199 seconds 
00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.036 rmmod nvme_tcp 00:11:26.036 rmmod nvme_fabrics 00:11:26.036 rmmod nvme_keyring 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3994719 ']' 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3994719 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3994719 ']' 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3994719 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.036 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3994719 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3994719' 00:11:26.296 killing process with pid 3994719 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3994719 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3994719 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.296 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.842 00:11:28.842 real 0m13.213s 00:11:28.842 user 0m13.878s 00:11:28.842 sys 0m6.917s 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.842 ************************************ 00:11:28.842 END TEST nvmf_bdevio 00:11:28.842 ************************************ 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:28.842 00:11:28.842 real 5m15.759s 00:11:28.842 user 11m48.430s 00:11:28.842 sys 1m59.525s 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:28.842 ************************************ 00:11:28.842 END TEST nvmf_target_core 00:11:28.842 ************************************ 00:11:28.842 11:05:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:28.842 11:05:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.842 11:05:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.842 11:05:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:28.842 ************************************ 00:11:28.842 START TEST nvmf_target_extra 00:11:28.842 ************************************ 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:28.842 * Looking for test storage... 00:11:28.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:28.842 11:05:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:28.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.842 --rc genhtml_branch_coverage=1 00:11:28.842 --rc genhtml_function_coverage=1 00:11:28.842 --rc genhtml_legend=1 00:11:28.842 --rc geninfo_all_blocks=1 
00:11:28.842 --rc geninfo_unexecuted_blocks=1 00:11:28.842 00:11:28.842 ' 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:28.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.842 --rc genhtml_branch_coverage=1 00:11:28.842 --rc genhtml_function_coverage=1 00:11:28.842 --rc genhtml_legend=1 00:11:28.842 --rc geninfo_all_blocks=1 00:11:28.842 --rc geninfo_unexecuted_blocks=1 00:11:28.842 00:11:28.842 ' 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:28.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.842 --rc genhtml_branch_coverage=1 00:11:28.842 --rc genhtml_function_coverage=1 00:11:28.842 --rc genhtml_legend=1 00:11:28.842 --rc geninfo_all_blocks=1 00:11:28.842 --rc geninfo_unexecuted_blocks=1 00:11:28.842 00:11:28.842 ' 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:28.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.842 --rc genhtml_branch_coverage=1 00:11:28.842 --rc genhtml_function_coverage=1 00:11:28.842 --rc genhtml_legend=1 00:11:28.842 --rc geninfo_all_blocks=1 00:11:28.842 --rc geninfo_unexecuted_blocks=1 00:11:28.842 00:11:28.842 ' 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.842 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.843 ************************************ 00:11:28.843 START TEST nvmf_example 00:11:28.843 ************************************ 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:28.843 * Looking for test storage... 00:11:28.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:28.843 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.106 
11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.106 --rc genhtml_branch_coverage=1 00:11:29.106 --rc genhtml_function_coverage=1 00:11:29.106 --rc genhtml_legend=1 00:11:29.106 --rc geninfo_all_blocks=1 00:11:29.106 --rc geninfo_unexecuted_blocks=1 00:11:29.106 00:11:29.106 ' 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.106 --rc genhtml_branch_coverage=1 00:11:29.106 --rc genhtml_function_coverage=1 00:11:29.106 --rc genhtml_legend=1 00:11:29.106 --rc geninfo_all_blocks=1 00:11:29.106 --rc geninfo_unexecuted_blocks=1 00:11:29.106 00:11:29.106 ' 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.106 --rc genhtml_branch_coverage=1 00:11:29.106 --rc genhtml_function_coverage=1 00:11:29.106 --rc genhtml_legend=1 00:11:29.106 --rc geninfo_all_blocks=1 00:11:29.106 --rc geninfo_unexecuted_blocks=1 00:11:29.106 00:11:29.106 ' 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.106 --rc 
genhtml_branch_coverage=1 00:11:29.106 --rc genhtml_function_coverage=1 00:11:29.106 --rc genhtml_legend=1 00:11:29.106 --rc geninfo_all_blocks=1 00:11:29.106 --rc geninfo_unexecuted_blocks=1 00:11:29.106 00:11:29.106 ' 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.106 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:29.107 11:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.107 
11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.107 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.258 11:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:37.258 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:37.258 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:37.258 Found net devices under 0000:31:00.0: cvl_0_0 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.258 11:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:37.258 Found net devices under 0000:31:00.1: cvl_0_1 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.258 
11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.258 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.259 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:11:37.521 00:11:37.521 --- 10.0.0.2 ping statistics --- 00:11:37.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.521 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:11:37.521 00:11:37.521 --- 10.0.0.1 ping statistics --- 00:11:37.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.521 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.521 11:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3999949 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3999949 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3999949 ']' 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:37.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.521 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:38.495 
11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:38.495 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:50.726 Initializing NVMe Controllers 00:11:50.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:50.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:50.726 Initialization complete. Launching workers. 00:11:50.726 ======================================================== 00:11:50.726 Latency(us) 00:11:50.726 Device Information : IOPS MiB/s Average min max 00:11:50.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18657.90 72.88 3431.53 690.83 15305.61 00:11:50.726 ======================================================== 00:11:50.726 Total : 18657.90 72.88 3431.53 690.83 15305.61 00:11:50.726 00:11:50.726 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:50.726 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:50.726 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.726 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:50.726 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.726 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:50.726 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.727 rmmod nvme_tcp 00:11:50.727 rmmod nvme_fabrics 00:11:50.727 rmmod nvme_keyring 00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3999949 ']' 00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3999949 00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3999949 ']' 00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3999949 00:11:50.727 11:05:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3999949 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3999949' 00:11:50.727 killing process with pid 3999949 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3999949 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3999949 00:11:50.727 nvmf threads initialize successfully 00:11:50.727 bdev subsystem init successfully 00:11:50.727 created a nvmf target service 00:11:50.727 create targets's poll groups done 00:11:50.727 all subsystems of target started 00:11:50.727 nvmf target is running 00:11:50.727 all subsystems of target stopped 00:11:50.727 destroy targets's poll groups done 00:11:50.727 destroyed the nvmf target service 00:11:50.727 bdev subsystem 
finish successfully 00:11:50.727 nvmf threads destroy successfully 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.727 11:05:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.987 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.987 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:50.987 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.987 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.987 00:11:50.987 real 0m22.246s 00:11:50.987 user 0m46.923s 00:11:50.987 sys 0m7.544s 00:11:50.987 
11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.987 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.987 ************************************ 00:11:50.987 END TEST nvmf_example 00:11:50.987 ************************************ 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.249 ************************************ 00:11:51.249 START TEST nvmf_filesystem 00:11:51.249 ************************************ 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:51.249 * Looking for test storage... 
00:11:51.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:51.249 
11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:51.249 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.514 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:51.514 --rc genhtml_branch_coverage=1 00:11:51.514 --rc genhtml_function_coverage=1 00:11:51.514 --rc genhtml_legend=1 00:11:51.514 --rc geninfo_all_blocks=1 00:11:51.514 --rc geninfo_unexecuted_blocks=1 00:11:51.514 00:11:51.514 ' 00:11:51.514 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.514 --rc genhtml_branch_coverage=1 00:11:51.514 --rc genhtml_function_coverage=1 00:11:51.514 --rc genhtml_legend=1 00:11:51.515 --rc geninfo_all_blocks=1 00:11:51.515 --rc geninfo_unexecuted_blocks=1 00:11:51.515 00:11:51.515 ' 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.515 --rc genhtml_branch_coverage=1 00:11:51.515 --rc genhtml_function_coverage=1 00:11:51.515 --rc genhtml_legend=1 00:11:51.515 --rc geninfo_all_blocks=1 00:11:51.515 --rc geninfo_unexecuted_blocks=1 00:11:51.515 00:11:51.515 ' 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.515 --rc genhtml_branch_coverage=1 00:11:51.515 --rc genhtml_function_coverage=1 00:11:51.515 --rc genhtml_legend=1 00:11:51.515 --rc geninfo_all_blocks=1 00:11:51.515 --rc geninfo_unexecuted_blocks=1 00:11:51.515 00:11:51.515 ' 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:51.515 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:51.515 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:51.515 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:51.515 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:51.515 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:51.516 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:51.516 
11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:51.516 #define SPDK_CONFIG_H 00:11:51.516 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:51.516 #define SPDK_CONFIG_APPS 1 00:11:51.516 #define SPDK_CONFIG_ARCH native 00:11:51.516 #undef SPDK_CONFIG_ASAN 00:11:51.516 #undef SPDK_CONFIG_AVAHI 00:11:51.516 #undef SPDK_CONFIG_CET 00:11:51.516 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:51.516 #define SPDK_CONFIG_COVERAGE 1 00:11:51.516 #define SPDK_CONFIG_CROSS_PREFIX 00:11:51.516 #undef SPDK_CONFIG_CRYPTO 00:11:51.516 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:51.516 #undef SPDK_CONFIG_CUSTOMOCF 00:11:51.516 #undef SPDK_CONFIG_DAOS 00:11:51.516 #define SPDK_CONFIG_DAOS_DIR 00:11:51.516 #define SPDK_CONFIG_DEBUG 1 00:11:51.516 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:51.516 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:51.516 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:51.516 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:51.516 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:51.516 #undef SPDK_CONFIG_DPDK_UADK 00:11:51.516 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:51.516 #define SPDK_CONFIG_EXAMPLES 1 00:11:51.516 #undef SPDK_CONFIG_FC 00:11:51.516 #define SPDK_CONFIG_FC_PATH 00:11:51.516 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:51.516 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:51.516 #define SPDK_CONFIG_FSDEV 1 00:11:51.516 #undef SPDK_CONFIG_FUSE 00:11:51.516 #undef SPDK_CONFIG_FUZZER 00:11:51.516 #define SPDK_CONFIG_FUZZER_LIB 00:11:51.516 #undef SPDK_CONFIG_GOLANG 00:11:51.516 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:51.516 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:51.516 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:51.516 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:51.516 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:51.516 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:51.516 #undef SPDK_CONFIG_HAVE_LZ4 00:11:51.516 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:51.516 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:51.516 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:51.516 #define SPDK_CONFIG_IDXD 1 00:11:51.516 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:51.516 #undef SPDK_CONFIG_IPSEC_MB 00:11:51.516 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:51.516 #define SPDK_CONFIG_ISAL 1 00:11:51.516 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:51.516 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:51.516 #define SPDK_CONFIG_LIBDIR 00:11:51.516 #undef SPDK_CONFIG_LTO 00:11:51.516 #define SPDK_CONFIG_MAX_LCORES 128 00:11:51.516 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:51.516 #define SPDK_CONFIG_NVME_CUSE 1 00:11:51.516 #undef SPDK_CONFIG_OCF 00:11:51.516 #define SPDK_CONFIG_OCF_PATH 00:11:51.516 #define SPDK_CONFIG_OPENSSL_PATH 00:11:51.516 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:51.516 #define SPDK_CONFIG_PGO_DIR 00:11:51.516 #undef SPDK_CONFIG_PGO_USE 00:11:51.516 #define SPDK_CONFIG_PREFIX /usr/local 00:11:51.516 #undef SPDK_CONFIG_RAID5F 00:11:51.516 #undef SPDK_CONFIG_RBD 00:11:51.516 #define SPDK_CONFIG_RDMA 1 00:11:51.516 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:51.516 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:51.516 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:51.516 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:51.516 #define SPDK_CONFIG_SHARED 1 00:11:51.516 #undef SPDK_CONFIG_SMA 00:11:51.516 #define SPDK_CONFIG_TESTS 1 00:11:51.516 #undef SPDK_CONFIG_TSAN 00:11:51.516 #define SPDK_CONFIG_UBLK 1 00:11:51.516 #define SPDK_CONFIG_UBSAN 1 00:11:51.516 #undef SPDK_CONFIG_UNIT_TESTS 00:11:51.516 #undef SPDK_CONFIG_URING 00:11:51.516 #define SPDK_CONFIG_URING_PATH 00:11:51.516 #undef SPDK_CONFIG_URING_ZNS 00:11:51.516 #undef SPDK_CONFIG_USDT 00:11:51.516 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:51.516 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:51.516 #define SPDK_CONFIG_VFIO_USER 1 00:11:51.516 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:51.516 #define SPDK_CONFIG_VHOST 1 00:11:51.516 #define SPDK_CONFIG_VIRTIO 1 00:11:51.516 #undef SPDK_CONFIG_VTUNE 00:11:51.516 #define SPDK_CONFIG_VTUNE_DIR 00:11:51.516 #define SPDK_CONFIG_WERROR 1 00:11:51.516 #define SPDK_CONFIG_WPDK_DIR 00:11:51.516 #undef SPDK_CONFIG_XNVME 00:11:51.516 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.516 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
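[Editorial note] The `applications.sh@23` trace above tests whether `include/spdk/config.h` contains `#define SPDK_CONFIG_DEBUG` by glob-matching the whole file contents inside `[[ ... ]]`. A minimal standalone sketch of that idiom follows; the temp file stands in for the real config header, so the path and contents here are illustrative, not the workspace layout:

```shell
# Sketch of the substring check applications.sh performs on config.h:
# read a file with $(<file) and glob-match it, avoiding a grep fork.
tmpcfg=$(mktemp)
cat > "$tmpcfg" <<'EOF'
#ifndef SPDK_CONFIG_H
#define SPDK_CONFIG_H
#define SPDK_CONFIG_DEBUG 1
#endif /* SPDK_CONFIG_H */
EOF

debug_enabled=0
# The *pattern* wildcards make this a substring test, matching the
# heavily escaped pattern visible in the trace above.
if [[ $(<"$tmpcfg") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    debug_enabled=1
fi
echo "debug_enabled=$debug_enabled"
rm -f "$tmpcfg"
```

This requires bash (both `[[ ]]` and `$(<file)` are bashisms); in the trace the match result then gates `SPDK_AUTOTEST_DEBUG_APPS` handling.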
00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:51.517 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:51.517 
11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:51.517 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:51.518 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:51.518 
11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:51.518 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
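[Editorial note] The `pm/common@70-85` entries earlier in this trace build the resource-monitor list: an associative array records which collectors need sudo, and extra collectors are appended only on non-QEMU, non-container Linux hosts. A simplified sketch of that selection logic, with the board string as a hypothetical stand-in for the platform probe seen in the log:

```shell
# Which collectors must run under sudo (1) vs. plain (0),
# mirroring MONITOR_RESOURCES_SUDO from pm/common@70-74.
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
SUDO=("" "sudo -E")   # indexed by the 0/1 flag above

MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
PM_OS=Linux
board="..............................."   # stand-in for a non-QEMU platform string

# Append hardware collectors only on bare-metal Linux outside a container.
if [[ $PM_OS == Linux && $board != QEMU && ! -e /.dockerenv ]]; then
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
fi
echo "monitors: ${MONITOR_RESOURCES[*]}"
echo "bmc launcher: ${SUDO[${MONITOR_RESOURCES_SUDO[collect-bmc-pm]}]}"
```

The two-element `SUDO` array lets the caller prefix each collector command with `sudo -E` or nothing by indexing with the array's flag, which is why the trace defines `SUDO[0]=` and `SUDO[1]='sudo -E'`.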
00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
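[Editorial note] The long run of `autotest_common.sh` entries above alternates a bare `: value` trace with an `export SPDK_TEST_*` trace — the xtrace footprint of the `: "${VAR:=default}"; export VAR` pattern, which gives every test knob an explicit value for downstream scripts. A small sketch of that pattern; the variable names mirror the log, and the defaults shown are examples taken from this run (tcp transport, e810 NICs), not canonical values:

```shell
# Give each autotest flag a default only if the caller left it unset,
# then export so child scripts see an explicit value for every knob.
: "${SPDK_TEST_NVMF:=1}"
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
: "${SPDK_TEST_NVMF_NICS:=e810}"
: "${SPDK_RUN_UBSAN:=1}"
: "${SPDK_TEST_SMA:=0}"
export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT SPDK_TEST_NVMF_NICS \
       SPDK_RUN_UBSAN SPDK_TEST_SMA
echo "transport=$SPDK_TEST_NVMF_TRANSPORT nics=$SPDK_TEST_NVMF_NICS"
```

Under `set -x` each `: "${VAR:=0}"` prints as `: 0` once expanded, which is exactly the `-- # : 0` / `-- # export ...` pairs filling this section of the log.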
00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:51.518 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 4002806 ]] 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 4002806 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.FKrulx 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FKrulx/tests/target /tmp/spdk.FKrulx 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.519 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122359910400 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6996639744 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666906624 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847697408 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23613440 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:51.520 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677691392 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=585728 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:51.520 * Looking for test storage... 
00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122359910400 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9211232256 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.520 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:51.520 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:51.520 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.521 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.784 --rc genhtml_branch_coverage=1 00:11:51.784 --rc genhtml_function_coverage=1 00:11:51.784 --rc genhtml_legend=1 00:11:51.784 --rc geninfo_all_blocks=1 00:11:51.784 --rc geninfo_unexecuted_blocks=1 00:11:51.784 00:11:51.784 ' 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.784 --rc genhtml_branch_coverage=1 00:11:51.784 --rc genhtml_function_coverage=1 00:11:51.784 --rc genhtml_legend=1 00:11:51.784 --rc geninfo_all_blocks=1 00:11:51.784 --rc geninfo_unexecuted_blocks=1 00:11:51.784 00:11:51.784 ' 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.784 --rc genhtml_branch_coverage=1 00:11:51.784 --rc genhtml_function_coverage=1 00:11:51.784 --rc genhtml_legend=1 00:11:51.784 --rc geninfo_all_blocks=1 00:11:51.784 --rc geninfo_unexecuted_blocks=1 00:11:51.784 00:11:51.784 ' 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.784 --rc genhtml_branch_coverage=1 00:11:51.784 --rc genhtml_function_coverage=1 00:11:51.784 --rc genhtml_legend=1 00:11:51.784 --rc geninfo_all_blocks=1 00:11:51.784 --rc geninfo_unexecuted_blocks=1 00:11:51.784 00:11:51.784 ' 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.784 11:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.784 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:51.785 11:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.932 11:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:59.932 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:59.932 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.932 11:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:59.932 Found net devices under 0000:31:00.0: cvl_0_0 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.932 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:59.932 Found net devices under 0000:31:00.1: cvl_0_1 00:11:59.933 11:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.933 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.194 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.194 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.194 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.194 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:12:00.195 00:12:00.195 --- 10.0.0.2 ping statistics --- 00:12:00.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.195 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:12:00.195 00:12:00.195 --- 10.0.0.1 ping statistics --- 00:12:00.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.195 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:00.195 11:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.195 ************************************ 00:12:00.195 START TEST nvmf_filesystem_no_in_capsule 00:12:00.195 ************************************ 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4007470 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4007470 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4007470 ']' 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.195 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.456 [2024-11-19 11:06:08.592024] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:12:00.456 [2024-11-19 11:06:08.592087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.456 [2024-11-19 11:06:08.686579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.456 [2024-11-19 11:06:08.729151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.456 [2024-11-19 11:06:08.729191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:00.456 [2024-11-19 11:06:08.729200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.456 [2024-11-19 11:06:08.729207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.456 [2024-11-19 11:06:08.729213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.456 [2024-11-19 11:06:08.730960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.457 [2024-11-19 11:06:08.731067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.457 [2024-11-19 11:06:08.731224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.457 [2024-11-19 11:06:08.731225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.402 [2024-11-19 11:06:09.446024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.402 Malloc1 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.402 [2024-11-19 11:06:09.580623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:01.402 11:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:01.402 { 00:12:01.402 "name": "Malloc1", 00:12:01.402 "aliases": [ 00:12:01.402 "1dd21134-580e-4296-b453-b1e8c7afcaa8" 00:12:01.402 ], 00:12:01.402 "product_name": "Malloc disk", 00:12:01.402 "block_size": 512, 00:12:01.402 "num_blocks": 1048576, 00:12:01.402 "uuid": "1dd21134-580e-4296-b453-b1e8c7afcaa8", 00:12:01.402 "assigned_rate_limits": { 00:12:01.402 "rw_ios_per_sec": 0, 00:12:01.402 "rw_mbytes_per_sec": 0, 00:12:01.402 "r_mbytes_per_sec": 0, 00:12:01.402 "w_mbytes_per_sec": 0 00:12:01.402 }, 00:12:01.402 "claimed": true, 00:12:01.402 "claim_type": "exclusive_write", 00:12:01.402 "zoned": false, 00:12:01.402 "supported_io_types": { 00:12:01.402 "read": true, 00:12:01.402 "write": true, 00:12:01.402 "unmap": true, 00:12:01.402 "flush": true, 00:12:01.402 "reset": true, 00:12:01.402 "nvme_admin": false, 00:12:01.402 "nvme_io": false, 00:12:01.402 "nvme_io_md": false, 00:12:01.402 "write_zeroes": true, 00:12:01.402 "zcopy": true, 00:12:01.402 "get_zone_info": false, 00:12:01.402 "zone_management": false, 00:12:01.402 "zone_append": false, 00:12:01.402 "compare": false, 00:12:01.402 "compare_and_write": 
false, 00:12:01.402 "abort": true, 00:12:01.402 "seek_hole": false, 00:12:01.402 "seek_data": false, 00:12:01.402 "copy": true, 00:12:01.402 "nvme_iov_md": false 00:12:01.402 }, 00:12:01.402 "memory_domains": [ 00:12:01.402 { 00:12:01.402 "dma_device_id": "system", 00:12:01.402 "dma_device_type": 1 00:12:01.402 }, 00:12:01.402 { 00:12:01.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.402 "dma_device_type": 2 00:12:01.402 } 00:12:01.402 ], 00:12:01.402 "driver_specific": {} 00:12:01.402 } 00:12:01.402 ]' 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:01.402 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.316 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:03.316 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.316 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.316 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.316 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:05.230 11:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:05.230 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:05.491 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:06.062 11:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:07.004 11:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.004 ************************************ 00:12:07.004 START TEST filesystem_ext4 00:12:07.004 ************************************ 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:07.004 11:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:07.004 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:07.004 mke2fs 1.47.0 (5-Feb-2023) 00:12:07.004 Discarding device blocks: 0/522240 done 00:12:07.004 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:07.004 Filesystem UUID: e7bafce2-5e95-4cd4-8b3b-8a8d17aa34dd 00:12:07.004 Superblock backups stored on blocks: 00:12:07.004 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:07.004 00:12:07.004 Allocating group tables: 0/64 done 00:12:07.004 Writing inode tables: 0/64 done 00:12:07.264 Creating journal (8192 blocks): done 00:12:07.264 Writing superblocks and filesystem accounting information: 0/64 done 00:12:07.265 00:12:07.265 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:07.265 11:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.547 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.808 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:12.808 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.809 11:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:12.809 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:12.809 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.809 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4007470 00:12:12.809 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.809 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.809 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.809 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.809 00:12:12.809 real 0m5.766s 00:12:12.809 user 0m0.039s 00:12:12.809 sys 0m0.066s 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:12.809 ************************************ 00:12:12.809 END TEST filesystem_ext4 00:12:12.809 ************************************ 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:12.809 
11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.809 ************************************ 00:12:12.809 START TEST filesystem_btrfs 00:12:12.809 ************************************ 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:12.809 11:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:12.809 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:13.070 btrfs-progs v6.8.1 00:12:13.070 See https://btrfs.readthedocs.io for more information. 00:12:13.070 00:12:13.070 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:13.070 NOTE: several default settings have changed in version 5.15, please make sure 00:12:13.070 this does not affect your deployments: 00:12:13.070 - DUP for metadata (-m dup) 00:12:13.070 - enabled no-holes (-O no-holes) 00:12:13.070 - enabled free-space-tree (-R free-space-tree) 00:12:13.070 00:12:13.070 Label: (null) 00:12:13.070 UUID: 02ed5fdd-5e0f-4f85-89bf-f7c1fade5308 00:12:13.070 Node size: 16384 00:12:13.070 Sector size: 4096 (CPU page size: 4096) 00:12:13.070 Filesystem size: 510.00MiB 00:12:13.070 Block group profiles: 00:12:13.070 Data: single 8.00MiB 00:12:13.070 Metadata: DUP 32.00MiB 00:12:13.070 System: DUP 8.00MiB 00:12:13.070 SSD detected: yes 00:12:13.070 Zoned device: no 00:12:13.070 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:13.070 Checksum: crc32c 00:12:13.070 Number of devices: 1 00:12:13.070 Devices: 00:12:13.070 ID SIZE PATH 00:12:13.070 1 510.00MiB /dev/nvme0n1p1 00:12:13.070 00:12:13.070 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:13.070 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:13.643 11:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4007470 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:13.643 00:12:13.643 real 0m0.674s 00:12:13.643 user 0m0.021s 00:12:13.643 sys 0m0.133s 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.643 
11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:13.643 ************************************ 00:12:13.643 END TEST filesystem_btrfs 00:12:13.643 ************************************ 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.643 ************************************ 00:12:13.643 START TEST filesystem_xfs 00:12:13.643 ************************************ 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:13.643 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:13.643 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:13.643 = sectsz=512 attr=2, projid32bit=1 00:12:13.643 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:13.643 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:13.643 data = bsize=4096 blocks=130560, imaxpct=25 00:12:13.643 = sunit=0 swidth=0 blks 00:12:13.643 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:13.643 log =internal log bsize=4096 blocks=16384, version=2 00:12:13.643 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:13.643 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:15.031 Discarding blocks...Done. 
00:12:15.031 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:15.031 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4007470 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.948 11:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.948 00:12:16.948 real 0m3.287s 00:12:16.948 user 0m0.023s 00:12:16.948 sys 0m0.085s 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:16.948 ************************************ 00:12:16.948 END TEST filesystem_xfs 00:12:16.948 ************************************ 00:12:16.948 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:17.208 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.779 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4007470 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4007470 ']' 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4007470 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4007470 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4007470' 00:12:17.779 killing process with pid 4007470 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 4007470 00:12:17.779 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 4007470 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:18.039 00:12:18.039 real 0m17.767s 00:12:18.039 user 1m10.178s 00:12:18.039 sys 0m1.418s 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.039 ************************************ 00:12:18.039 END TEST nvmf_filesystem_no_in_capsule 00:12:18.039 ************************************ 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.039 11:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.039 ************************************ 00:12:18.039 START TEST nvmf_filesystem_in_capsule 00:12:18.039 ************************************ 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=4011486 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 4011486 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4011486 ']' 00:12:18.039 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.040 11:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.040 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.040 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.040 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.300 [2024-11-19 11:06:26.436676] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:12:18.300 [2024-11-19 11:06:26.436722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.300 [2024-11-19 11:06:26.523314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.300 [2024-11-19 11:06:26.558500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.300 [2024-11-19 11:06:26.558536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.300 [2024-11-19 11:06:26.558544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.300 [2024-11-19 11:06:26.558551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.300 [2024-11-19 11:06:26.558556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:18.300 [2024-11-19 11:06:26.562877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.300 [2024-11-19 11:06:26.562909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.300 [2024-11-19 11:06:26.563071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.300 [2024-11-19 11:06:26.563163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.300 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.300 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:18.300 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.300 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.300 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 [2024-11-19 11:06:26.695019] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 Malloc1 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 11:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 [2024-11-19 11:06:26.831674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.560 11:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:18.560 { 00:12:18.560 "name": "Malloc1", 00:12:18.560 "aliases": [ 00:12:18.560 "305ef633-4ee9-45cc-ba57-d20a965be217" 00:12:18.560 ], 00:12:18.560 "product_name": "Malloc disk", 00:12:18.560 "block_size": 512, 00:12:18.560 "num_blocks": 1048576, 00:12:18.560 "uuid": "305ef633-4ee9-45cc-ba57-d20a965be217", 00:12:18.560 "assigned_rate_limits": { 00:12:18.560 "rw_ios_per_sec": 0, 00:12:18.560 "rw_mbytes_per_sec": 0, 00:12:18.560 "r_mbytes_per_sec": 0, 00:12:18.560 "w_mbytes_per_sec": 0 00:12:18.560 }, 00:12:18.560 "claimed": true, 00:12:18.560 "claim_type": "exclusive_write", 00:12:18.560 "zoned": false, 00:12:18.560 "supported_io_types": { 00:12:18.560 "read": true, 00:12:18.560 "write": true, 00:12:18.560 "unmap": true, 00:12:18.560 "flush": true, 00:12:18.560 "reset": true, 00:12:18.560 "nvme_admin": false, 00:12:18.560 "nvme_io": false, 00:12:18.560 "nvme_io_md": false, 00:12:18.560 "write_zeroes": true, 00:12:18.560 "zcopy": true, 00:12:18.560 "get_zone_info": false, 00:12:18.560 "zone_management": false, 00:12:18.560 "zone_append": false, 00:12:18.560 "compare": false, 00:12:18.560 "compare_and_write": false, 00:12:18.560 "abort": true, 00:12:18.560 "seek_hole": false, 00:12:18.560 "seek_data": false, 00:12:18.560 "copy": true, 00:12:18.560 "nvme_iov_md": false 00:12:18.560 }, 00:12:18.560 "memory_domains": [ 00:12:18.560 { 00:12:18.560 "dma_device_id": "system", 00:12:18.560 "dma_device_type": 1 00:12:18.560 }, 00:12:18.560 { 00:12:18.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.560 "dma_device_type": 2 00:12:18.560 } 00:12:18.560 ], 00:12:18.560 
"driver_specific": {} 00:12:18.560 } 00:12:18.560 ]' 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:18.560 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:18.833 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:18.833 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:18.833 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:18.833 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:18.833 11:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.216 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.216 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:20.216 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.216 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:20.216 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.129 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.390 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.390 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.390 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.390 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:22.391 11:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:22.391 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:22.652 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:22.912 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.854 ************************************ 00:12:23.854 START TEST filesystem_in_capsule_ext4 00:12:23.854 ************************************ 00:12:23.854 11:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:23.854 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:23.854 mke2fs 1.47.0 (5-Feb-2023) 00:12:23.854 Discarding device blocks: 
0/522240 done 00:12:23.854 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:23.854 Filesystem UUID: 909f9e74-23f6-4a00-973d-5bbe86cf02ea 00:12:23.854 Superblock backups stored on blocks: 00:12:23.854 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:23.854 00:12:23.854 Allocating group tables: 0/64 done 00:12:23.854 Writing inode tables: 0/64 done 00:12:24.114 Creating journal (8192 blocks): done 00:12:25.760 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:12:25.760 00:12:25.760 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:25.760 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:31.051 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 4011486 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:31.339 00:12:31.339 real 0m7.395s 00:12:31.339 user 0m0.024s 00:12:31.339 sys 0m0.084s 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:31.339 ************************************ 00:12:31.339 END TEST filesystem_in_capsule_ext4 00:12:31.339 ************************************ 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.339 ************************************ 00:12:31.339 START 
TEST filesystem_in_capsule_btrfs 00:12:31.339 ************************************ 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:31.339 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:31.340 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:31.340 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:31.658 btrfs-progs v6.8.1 00:12:31.658 See https://btrfs.readthedocs.io for more information. 00:12:31.658 00:12:31.658 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:31.658 NOTE: several default settings have changed in version 5.15, please make sure 00:12:31.658 this does not affect your deployments: 00:12:31.658 - DUP for metadata (-m dup) 00:12:31.658 - enabled no-holes (-O no-holes) 00:12:31.658 - enabled free-space-tree (-R free-space-tree) 00:12:31.658 00:12:31.658 Label: (null) 00:12:31.658 UUID: 74b95891-7ad3-4df2-b1b8-6aaa22f8b88e 00:12:31.658 Node size: 16384 00:12:31.658 Sector size: 4096 (CPU page size: 4096) 00:12:31.658 Filesystem size: 510.00MiB 00:12:31.658 Block group profiles: 00:12:31.658 Data: single 8.00MiB 00:12:31.658 Metadata: DUP 32.00MiB 00:12:31.658 System: DUP 8.00MiB 00:12:31.658 SSD detected: yes 00:12:31.658 Zoned device: no 00:12:31.658 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:31.658 Checksum: crc32c 00:12:31.658 Number of devices: 1 00:12:31.658 Devices: 00:12:31.658 ID SIZE PATH 00:12:31.658 1 510.00MiB /dev/nvme0n1p1 00:12:31.658 00:12:31.658 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:31.658 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4011486 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.623 00:12:32.623 real 0m1.319s 00:12:32.623 user 0m0.036s 00:12:32.623 sys 0m0.114s 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:32.623 ************************************ 00:12:32.623 END TEST filesystem_in_capsule_btrfs 00:12:32.623 ************************************ 00:12:32.623 11:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.623 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.884 ************************************ 00:12:32.884 START TEST filesystem_in_capsule_xfs 00:12:32.884 ************************************ 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:32.884 
11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:32.884 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:32.884 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:32.884 = sectsz=512 attr=2, projid32bit=1 00:12:32.884 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:32.884 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:32.884 data = bsize=4096 blocks=130560, imaxpct=25 00:12:32.884 = sunit=0 swidth=0 blks 00:12:32.884 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:32.884 log =internal log bsize=4096 blocks=16384, version=2 00:12:32.884 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:32.884 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:34.266 Discarding blocks...Done. 
00:12:34.266 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:34.266 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4011486 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:36.809 00:12:36.809 real 0m3.665s 00:12:36.809 user 0m0.023s 00:12:36.809 sys 0m0.085s 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:36.809 ************************************ 00:12:36.809 END TEST filesystem_in_capsule_xfs 00:12:36.809 ************************************ 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:36.809 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.810 11:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4011486 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4011486 ']' 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4011486 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.810 11:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4011486 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4011486' 00:12:36.810 killing process with pid 4011486 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 4011486 00:12:36.810 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 4011486 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:37.070 00:12:37.070 real 0m18.838s 00:12:37.070 user 1m14.373s 00:12:37.070 sys 0m1.417s 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.070 ************************************ 00:12:37.070 END TEST nvmf_filesystem_in_capsule 00:12:37.070 ************************************ 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.070 rmmod nvme_tcp 00:12:37.070 rmmod nvme_fabrics 00:12:37.070 rmmod nvme_keyring 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.070 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.071 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.071 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.071 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.617 00:12:39.617 real 0m48.001s 00:12:39.617 user 2m27.174s 00:12:39.617 sys 0m9.570s 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.617 ************************************ 00:12:39.617 END TEST nvmf_filesystem 00:12:39.617 ************************************ 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.617 ************************************ 00:12:39.617 START TEST nvmf_target_discovery 00:12:39.617 ************************************ 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:39.617 * Looking for test storage... 
00:12:39.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.617 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:39.618 
11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:39.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.618 --rc genhtml_branch_coverage=1 00:12:39.618 --rc genhtml_function_coverage=1 00:12:39.618 --rc genhtml_legend=1 00:12:39.618 --rc geninfo_all_blocks=1 00:12:39.618 --rc geninfo_unexecuted_blocks=1 00:12:39.618 00:12:39.618 ' 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:39.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.618 --rc genhtml_branch_coverage=1 00:12:39.618 --rc genhtml_function_coverage=1 00:12:39.618 --rc genhtml_legend=1 00:12:39.618 --rc geninfo_all_blocks=1 00:12:39.618 --rc geninfo_unexecuted_blocks=1 00:12:39.618 00:12:39.618 ' 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:39.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.618 --rc genhtml_branch_coverage=1 00:12:39.618 --rc genhtml_function_coverage=1 00:12:39.618 --rc genhtml_legend=1 00:12:39.618 --rc geninfo_all_blocks=1 00:12:39.618 --rc geninfo_unexecuted_blocks=1 00:12:39.618 00:12:39.618 ' 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:39.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.618 --rc genhtml_branch_coverage=1 00:12:39.618 --rc genhtml_function_coverage=1 00:12:39.618 --rc genhtml_legend=1 00:12:39.618 --rc geninfo_all_blocks=1 00:12:39.618 --rc geninfo_unexecuted_blocks=1 00:12:39.618 00:12:39.618 ' 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.618 11:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.618 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.619 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.767 11:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.767 11:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:47.767 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:47.767 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.767 11:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.767 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:47.768 Found net devices under 0000:31:00.0: cvl_0_0 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.768 11:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:47.768 Found net devices under 0000:31:00.1: cvl_0_1 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:12:47.768 00:12:47.768 --- 10.0.0.2 ping statistics --- 00:12:47.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.768 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:12:47.768 00:12:47.768 --- 10.0.0.1 ping statistics --- 00:12:47.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.768 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=4020084 00:12:47.768 11:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 4020084 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 4020084 ']' 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.768 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.768 [2024-11-19 11:06:55.713129] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:12:47.768 [2024-11-19 11:06:55.713183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.768 [2024-11-19 11:06:55.799115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.768 [2024-11-19 11:06:55.835791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:47.768 [2024-11-19 11:06:55.835823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.768 [2024-11-19 11:06:55.835831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.768 [2024-11-19 11:06:55.835838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.768 [2024-11-19 11:06:55.835844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.768 [2024-11-19 11:06:55.837396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.768 [2024-11-19 11:06:55.837508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.769 [2024-11-19 11:06:55.837661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.769 [2024-11-19 11:06:55.837661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.341 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.341 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:48.341 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.341 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.341 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 [2024-11-19 11:06:56.559592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 Null1 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 
11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 [2024-11-19 11:06:56.619920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 Null2 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 
11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.342 Null3 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.342 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.603 Null4 00:12:48.603 
11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.603 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.604 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:48.866 00:12:48.866 Discovery Log Number of Records 6, Generation counter 6 00:12:48.866 =====Discovery Log Entry 0====== 00:12:48.866 trtype: tcp 00:12:48.866 adrfam: ipv4 00:12:48.866 subtype: current discovery subsystem 00:12:48.866 treq: not required 00:12:48.866 portid: 0 00:12:48.866 trsvcid: 4420 00:12:48.866 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:48.866 traddr: 10.0.0.2 00:12:48.866 eflags: explicit discovery connections, duplicate discovery information 00:12:48.866 sectype: none 00:12:48.866 =====Discovery Log Entry 1====== 00:12:48.866 trtype: tcp 00:12:48.866 adrfam: ipv4 00:12:48.866 subtype: nvme subsystem 00:12:48.866 treq: not required 00:12:48.866 portid: 0 00:12:48.866 trsvcid: 4420 00:12:48.866 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:48.866 traddr: 10.0.0.2 00:12:48.866 eflags: none 00:12:48.866 sectype: none 00:12:48.866 =====Discovery Log Entry 2====== 00:12:48.866 
trtype: tcp 00:12:48.866 adrfam: ipv4 00:12:48.866 subtype: nvme subsystem 00:12:48.866 treq: not required 00:12:48.866 portid: 0 00:12:48.866 trsvcid: 4420 00:12:48.866 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:48.866 traddr: 10.0.0.2 00:12:48.866 eflags: none 00:12:48.866 sectype: none 00:12:48.866 =====Discovery Log Entry 3====== 00:12:48.866 trtype: tcp 00:12:48.866 adrfam: ipv4 00:12:48.866 subtype: nvme subsystem 00:12:48.866 treq: not required 00:12:48.866 portid: 0 00:12:48.866 trsvcid: 4420 00:12:48.866 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:48.866 traddr: 10.0.0.2 00:12:48.866 eflags: none 00:12:48.866 sectype: none 00:12:48.866 =====Discovery Log Entry 4====== 00:12:48.866 trtype: tcp 00:12:48.866 adrfam: ipv4 00:12:48.866 subtype: nvme subsystem 00:12:48.866 treq: not required 00:12:48.866 portid: 0 00:12:48.866 trsvcid: 4420 00:12:48.866 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:48.866 traddr: 10.0.0.2 00:12:48.866 eflags: none 00:12:48.866 sectype: none 00:12:48.866 =====Discovery Log Entry 5====== 00:12:48.866 trtype: tcp 00:12:48.866 adrfam: ipv4 00:12:48.866 subtype: discovery subsystem referral 00:12:48.866 treq: not required 00:12:48.866 portid: 0 00:12:48.866 trsvcid: 4430 00:12:48.866 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:48.866 traddr: 10.0.0.2 00:12:48.866 eflags: none 00:12:48.866 sectype: none 00:12:48.866 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:48.866 Perform nvmf subsystem discovery via RPC 00:12:48.866 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:48.866 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.866 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.866 [ 00:12:48.866 { 00:12:48.866 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:48.866 "subtype": "Discovery", 00:12:48.866 "listen_addresses": [ 00:12:48.866 { 00:12:48.866 "trtype": "TCP", 00:12:48.866 "adrfam": "IPv4", 00:12:48.866 "traddr": "10.0.0.2", 00:12:48.866 "trsvcid": "4420" 00:12:48.866 } 00:12:48.866 ], 00:12:48.866 "allow_any_host": true, 00:12:48.866 "hosts": [] 00:12:48.866 }, 00:12:48.866 { 00:12:48.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.866 "subtype": "NVMe", 00:12:48.866 "listen_addresses": [ 00:12:48.866 { 00:12:48.866 "trtype": "TCP", 00:12:48.866 "adrfam": "IPv4", 00:12:48.866 "traddr": "10.0.0.2", 00:12:48.866 "trsvcid": "4420" 00:12:48.866 } 00:12:48.866 ], 00:12:48.866 "allow_any_host": true, 00:12:48.866 "hosts": [], 00:12:48.866 "serial_number": "SPDK00000000000001", 00:12:48.866 "model_number": "SPDK bdev Controller", 00:12:48.866 "max_namespaces": 32, 00:12:48.866 "min_cntlid": 1, 00:12:48.866 "max_cntlid": 65519, 00:12:48.866 "namespaces": [ 00:12:48.866 { 00:12:48.866 "nsid": 1, 00:12:48.866 "bdev_name": "Null1", 00:12:48.866 "name": "Null1", 00:12:48.866 "nguid": "723E1128F7D94E788A9115F1A92DAA0E", 00:12:48.866 "uuid": "723e1128-f7d9-4e78-8a91-15f1a92daa0e" 00:12:48.866 } 00:12:48.866 ] 00:12:48.866 }, 00:12:48.866 { 00:12:48.866 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:48.866 "subtype": "NVMe", 00:12:48.866 "listen_addresses": [ 00:12:48.866 { 00:12:48.866 "trtype": "TCP", 00:12:48.866 "adrfam": "IPv4", 00:12:48.866 "traddr": "10.0.0.2", 00:12:48.866 "trsvcid": "4420" 00:12:48.866 } 00:12:48.866 ], 00:12:48.866 "allow_any_host": true, 00:12:48.866 "hosts": [], 00:12:48.866 "serial_number": "SPDK00000000000002", 00:12:48.866 "model_number": "SPDK bdev Controller", 00:12:48.866 "max_namespaces": 32, 00:12:48.866 "min_cntlid": 1, 00:12:48.866 "max_cntlid": 65519, 00:12:48.866 "namespaces": [ 00:12:48.866 { 00:12:48.866 "nsid": 1, 00:12:48.866 "bdev_name": "Null2", 00:12:48.866 "name": "Null2", 00:12:48.866 "nguid": "4B2FB1B98078401EB98D42217842F642", 
00:12:48.866 "uuid": "4b2fb1b9-8078-401e-b98d-42217842f642" 00:12:48.866 } 00:12:48.866 ] 00:12:48.866 }, 00:12:48.866 { 00:12:48.866 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:48.866 "subtype": "NVMe", 00:12:48.866 "listen_addresses": [ 00:12:48.866 { 00:12:48.866 "trtype": "TCP", 00:12:48.866 "adrfam": "IPv4", 00:12:48.866 "traddr": "10.0.0.2", 00:12:48.866 "trsvcid": "4420" 00:12:48.866 } 00:12:48.866 ], 00:12:48.866 "allow_any_host": true, 00:12:48.866 "hosts": [], 00:12:48.866 "serial_number": "SPDK00000000000003", 00:12:48.866 "model_number": "SPDK bdev Controller", 00:12:48.866 "max_namespaces": 32, 00:12:48.866 "min_cntlid": 1, 00:12:48.866 "max_cntlid": 65519, 00:12:48.866 "namespaces": [ 00:12:48.866 { 00:12:48.866 "nsid": 1, 00:12:48.866 "bdev_name": "Null3", 00:12:48.866 "name": "Null3", 00:12:48.866 "nguid": "ECAFDACBD4A94824BE233B58BE9FF985", 00:12:48.866 "uuid": "ecafdacb-d4a9-4824-be23-3b58be9ff985" 00:12:48.866 } 00:12:48.866 ] 00:12:48.866 }, 00:12:48.866 { 00:12:48.866 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:48.866 "subtype": "NVMe", 00:12:48.866 "listen_addresses": [ 00:12:48.866 { 00:12:48.866 "trtype": "TCP", 00:12:48.866 "adrfam": "IPv4", 00:12:48.866 "traddr": "10.0.0.2", 00:12:48.866 "trsvcid": "4420" 00:12:48.866 } 00:12:48.866 ], 00:12:48.866 "allow_any_host": true, 00:12:48.866 "hosts": [], 00:12:48.866 "serial_number": "SPDK00000000000004", 00:12:48.866 "model_number": "SPDK bdev Controller", 00:12:48.866 "max_namespaces": 32, 00:12:48.866 "min_cntlid": 1, 00:12:48.866 "max_cntlid": 65519, 00:12:48.866 "namespaces": [ 00:12:48.866 { 00:12:48.866 "nsid": 1, 00:12:48.866 "bdev_name": "Null4", 00:12:48.866 "name": "Null4", 00:12:48.866 "nguid": "2C38BE2D29ED47C080B51C83F70841A9", 00:12:48.866 "uuid": "2c38be2d-29ed-47c0-80b5-1c83f70841a9" 00:12:48.866 } 00:12:48.866 ] 00:12:48.866 } 00:12:48.866 ] 00:12:48.866 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.866 
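(Editor's note, not part of the captured log.) The `nvme discover` output above (Discovery Log Entries 0-5) and the `nvmf_get_subsystems` JSON both enumerate the four `cnode` subsystems plus the discovery subsystem and the 4430 referral. For readers replaying this log, here is a minimal Python sketch of turning the text-form discovery records into dicts; `parse_discovery_log` and the truncated two-entry sample are illustrative, not part of the SPDK test scripts:

```python
# Illustrative helper (assumption: not from the SPDK repo) that splits
# `nvme discover` text output, as printed in the log above, into one
# dict per "Discovery Log Entry" block.
import re

def parse_discovery_log(text):
    """Return a list of dicts, one per discovery log entry."""
    entries = []
    # Split on the "=====Discovery Log Entry N======" banners; drop the
    # leading chunk before the first banner.
    for block in re.split(r"=====Discovery Log Entry \d+======", text)[1:]:
        entry = {}
        for line in block.strip().splitlines():
            # partition() keeps colons inside values, e.g. subnqn NQNs.
            key, _, value = line.strip().partition(":")
            if value:
                entry[key.strip()] = value.strip()
        entries.append(entry)
    return entries

# Truncated sample mirroring entries 0 and 1 from the log above.
sample = """=====Discovery Log Entry 0======
trtype: tcp
adrfam: ipv4
subtype: current discovery subsystem
trsvcid: 4420
subnqn: nqn.2014-08.org.nvmexpress.discovery
traddr: 10.0.0.2
=====Discovery Log Entry 1======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
trsvcid: 4420
subnqn: nqn.2016-06.io.spdk:cnode1
traddr: 10.0.0.2
"""

records = parse_discovery_log(sample)
print(len(records))           # 2
print(records[1]["subnqn"])   # nqn.2016-06.io.spdk:cnode1
```

On the real output above this would yield six records: the current discovery subsystem, `cnode1` through `cnode4` on port 4420, and the referral on port 4430.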
11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:48.866 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.867 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.867 rmmod nvme_tcp 00:12:48.867 rmmod nvme_fabrics 00:12:49.128 rmmod nvme_keyring 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 4020084 ']' 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 4020084 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 4020084 ']' 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 4020084 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4020084 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4020084' 00:12:49.128 killing process with pid 4020084 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 4020084 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 4020084 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.128 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.129 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:51.675 00:12:51.675 real 0m12.038s 00:12:51.675 user 0m8.781s 00:12:51.675 sys 0m6.461s 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 ************************************ 00:12:51.675 END TEST nvmf_target_discovery 00:12:51.675 ************************************ 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 ************************************ 00:12:51.675 START TEST nvmf_referrals 00:12:51.675 ************************************ 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:51.675 * Looking for test storage... 
00:12:51.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:51.675 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.675 
--rc genhtml_branch_coverage=1 00:12:51.675 --rc genhtml_function_coverage=1 00:12:51.675 --rc genhtml_legend=1 00:12:51.675 --rc geninfo_all_blocks=1 00:12:51.675 --rc geninfo_unexecuted_blocks=1 00:12:51.675 00:12:51.675 ' 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.675 --rc genhtml_branch_coverage=1 00:12:51.675 --rc genhtml_function_coverage=1 00:12:51.675 --rc genhtml_legend=1 00:12:51.675 --rc geninfo_all_blocks=1 00:12:51.675 --rc geninfo_unexecuted_blocks=1 00:12:51.675 00:12:51.675 ' 00:12:51.675 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.676 --rc genhtml_branch_coverage=1 00:12:51.676 --rc genhtml_function_coverage=1 00:12:51.676 --rc genhtml_legend=1 00:12:51.676 --rc geninfo_all_blocks=1 00:12:51.676 --rc geninfo_unexecuted_blocks=1 00:12:51.676 00:12:51.676 ' 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.676 --rc genhtml_branch_coverage=1 00:12:51.676 --rc genhtml_function_coverage=1 00:12:51.676 --rc genhtml_legend=1 00:12:51.676 --rc geninfo_all_blocks=1 00:12:51.676 --rc geninfo_unexecuted_blocks=1 00:12:51.676 00:12:51.676 ' 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.676 
11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.676 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.676 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.676 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:59.819 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:59.819 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:59.819 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:59.819 Found net devices under 0000:31:00.0: cvl_0_0 00:12:59.819 11:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.819 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:59.820 Found net devices under 0000:31:00.1: cvl_0_1 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.820 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:00.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:13:00.080 00:13:00.080 --- 10.0.0.2 ping statistics --- 00:13:00.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.080 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:13:00.080 00:13:00.080 --- 10.0.0.1 ping statistics --- 00:13:00.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.080 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=4025136 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 4025136 00:13:00.080 
11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 4025136 ']' 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.080 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.080 [2024-11-19 11:07:08.430178] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:13:00.081 [2024-11-19 11:07:08.430233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.340 [2024-11-19 11:07:08.516702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.340 [2024-11-19 11:07:08.552291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.340 [2024-11-19 11:07:08.552324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:00.340 [2024-11-19 11:07:08.552332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.340 [2024-11-19 11:07:08.552339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.340 [2024-11-19 11:07:08.552345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.340 [2024-11-19 11:07:08.554004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.340 [2024-11-19 11:07:08.554119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.340 [2024-11-19 11:07:08.554273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.340 [2024-11-19 11:07:08.554274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.911 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.911 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:00.911 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.911 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.911 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.171 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.171 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.171 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.171 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.171 [2024-11-19 11:07:09.276017] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.171 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.171 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.172 [2024-11-19 11:07:09.288236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:01.172 11:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:01.172 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.433 11:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:01.433 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:01.694 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:01.955 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:01.955 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:01.955 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:01.955 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:01.955 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:01.955 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.955 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:02.215 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:02.215 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:02.215 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:02.215 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:02.215 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.215 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:02.215 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:02.215 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:02.216 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:02.477 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:02.477 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:02.477 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:02.477 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:02.477 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:02.477 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.477 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:02.739 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:02.739 11:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:02.739 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:02.739 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:02.739 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:02.739 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:02.739 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:02.739 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:02.739 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.739 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:03.000 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:03.262 rmmod nvme_tcp 00:13:03.262 rmmod nvme_fabrics 00:13:03.262 rmmod nvme_keyring 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 4025136 ']' 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 4025136 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 4025136 ']' 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 4025136 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4025136 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4025136' 00:13:03.262 killing process with pid 4025136 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 4025136 00:13:03.262 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 4025136 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.524 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.437 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:05.437 00:13:05.437 real 0m14.089s 00:13:05.437 user 0m15.708s 00:13:05.437 sys 0m7.152s 00:13:05.437 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.437 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:05.437 
************************************ 00:13:05.437 END TEST nvmf_referrals 00:13:05.437 ************************************ 00:13:05.437 11:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:05.437 11:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:05.437 11:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.437 11:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:05.437 ************************************ 00:13:05.437 START TEST nvmf_connect_disconnect 00:13:05.437 ************************************ 00:13:05.437 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:05.699 * Looking for test storage... 
00:13:05.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:05.699 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:05.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.700 --rc genhtml_branch_coverage=1 00:13:05.700 --rc genhtml_function_coverage=1 00:13:05.700 --rc genhtml_legend=1 00:13:05.700 --rc geninfo_all_blocks=1 00:13:05.700 --rc geninfo_unexecuted_blocks=1 00:13:05.700 00:13:05.700 ' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:05.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.700 --rc genhtml_branch_coverage=1 00:13:05.700 --rc genhtml_function_coverage=1 00:13:05.700 --rc genhtml_legend=1 00:13:05.700 --rc geninfo_all_blocks=1 00:13:05.700 --rc geninfo_unexecuted_blocks=1 00:13:05.700 00:13:05.700 ' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:05.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.700 --rc genhtml_branch_coverage=1 00:13:05.700 --rc genhtml_function_coverage=1 00:13:05.700 --rc genhtml_legend=1 00:13:05.700 --rc geninfo_all_blocks=1 00:13:05.700 --rc geninfo_unexecuted_blocks=1 00:13:05.700 00:13:05.700 ' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:05.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.700 --rc genhtml_branch_coverage=1 00:13:05.700 --rc genhtml_function_coverage=1 00:13:05.700 --rc genhtml_legend=1 00:13:05.700 --rc geninfo_all_blocks=1 00:13:05.700 --rc geninfo_unexecuted_blocks=1 00:13:05.700 00:13:05.700 ' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:05.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:05.700 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:05.700 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.704 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.704 11:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.704 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.704 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.704 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.704 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.704 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.705 11:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:15.705 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:15.705 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.705 11:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:15.705 Found net devices under 0000:31:00.0: cvl_0_0 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.705 11:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:15.705 Found net devices under 0000:31:00.1: cvl_0_1 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.705 11:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:13:15.705 00:13:15.705 --- 10.0.0.2 ping statistics --- 00:13:15.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.705 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:13:15.705 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:13:15.705 00:13:15.705 --- 10.0.0.1 ping statistics --- 00:13:15.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.705 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=4030596 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 4030596 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 4030596 ']' 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.706 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 [2024-11-19 11:07:22.694652] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:13:15.706 [2024-11-19 11:07:22.694719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.706 [2024-11-19 11:07:22.788419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.706 [2024-11-19 11:07:22.830441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:15.706 [2024-11-19 11:07:22.830478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.706 [2024-11-19 11:07:22.830487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.706 [2024-11-19 11:07:22.830494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.706 [2024-11-19 11:07:22.830500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.706 [2024-11-19 11:07:22.832164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.706 [2024-11-19 11:07:22.832288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.706 [2024-11-19 11:07:22.832443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.706 [2024-11-19 11:07:22.832444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:15.706 11:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 [2024-11-19 11:07:23.551554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.706 11:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 [2024-11-19 11:07:23.620204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:15.706 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:19.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:34.002 11:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:34.002 rmmod nvme_tcp 00:13:34.002 rmmod nvme_fabrics 00:13:34.002 rmmod nvme_keyring 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 4030596 ']' 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 4030596 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4030596 ']' 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 4030596 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.002 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4030596 
00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4030596' 00:13:34.002 killing process with pid 4030596 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 4030596 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 4030596 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.002 11:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.002 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.914 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.176 00:13:36.176 real 0m30.493s 00:13:36.176 user 1m19.399s 00:13:36.176 sys 0m8.043s 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.176 ************************************ 00:13:36.176 END TEST nvmf_connect_disconnect 00:13:36.176 ************************************ 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.176 ************************************ 00:13:36.176 START TEST nvmf_multitarget 00:13:36.176 ************************************ 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:36.176 * Looking for test storage... 
00:13:36.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.176 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:36.438 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.438 --rc genhtml_branch_coverage=1 00:13:36.438 --rc genhtml_function_coverage=1 00:13:36.438 --rc genhtml_legend=1 00:13:36.438 --rc geninfo_all_blocks=1 00:13:36.438 --rc geninfo_unexecuted_blocks=1 00:13:36.438 00:13:36.438 ' 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:36.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.438 --rc genhtml_branch_coverage=1 00:13:36.438 --rc genhtml_function_coverage=1 00:13:36.438 --rc genhtml_legend=1 00:13:36.438 --rc geninfo_all_blocks=1 00:13:36.438 --rc geninfo_unexecuted_blocks=1 00:13:36.438 00:13:36.438 ' 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:36.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.438 --rc genhtml_branch_coverage=1 00:13:36.438 --rc genhtml_function_coverage=1 00:13:36.438 --rc genhtml_legend=1 00:13:36.438 --rc geninfo_all_blocks=1 00:13:36.438 --rc geninfo_unexecuted_blocks=1 00:13:36.438 00:13:36.438 ' 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:36.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.438 --rc genhtml_branch_coverage=1 00:13:36.438 --rc genhtml_function_coverage=1 00:13:36.438 --rc genhtml_legend=1 00:13:36.438 --rc geninfo_all_blocks=1 00:13:36.438 --rc geninfo_unexecuted_blocks=1 00:13:36.438 00:13:36.438 ' 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.438 11:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.438 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.439 11:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.439 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:44.829 11:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.829 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:44.830 11:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:44.830 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:44.830 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.830 11:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:44.830 Found net devices under 0000:31:00.0: cvl_0_0 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.830 
11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:44.830 Found net devices under 0000:31:00.1: cvl_0_1 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.830 11:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:44.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:13:44.830 00:13:44.830 --- 10.0.0.2 ping statistics --- 00:13:44.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.830 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:13:44.830 00:13:44.830 --- 10.0.0.1 ping statistics --- 00:13:44.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.830 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:44.830 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:44.830 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:44.830 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:44.830 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.830 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:44.830 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=4039265 00:13:44.830 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 4039265 00:13:44.830 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.830 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 4039265 ']' 00:13:44.831 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.831 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.831 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.831 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.831 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 [2024-11-19 11:07:53.094220] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:13:45.092 [2024-11-19 11:07:53.094295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.092 [2024-11-19 11:07:53.187795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.092 [2024-11-19 11:07:53.230909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.092 [2024-11-19 11:07:53.230947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:45.092 [2024-11-19 11:07:53.230956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.092 [2024-11-19 11:07:53.230963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.092 [2024-11-19 11:07:53.230969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.092 [2024-11-19 11:07:53.232608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.092 [2024-11-19 11:07:53.232759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.092 [2024-11-19 11:07:53.232950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.092 [2024-11-19 11:07:53.232950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.661 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.661 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:45.661 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.661 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.661 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:45.661 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.661 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:45.661 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:45.661 11:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:45.921 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:45.921 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:45.921 "nvmf_tgt_1" 00:13:45.921 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:45.921 "nvmf_tgt_2" 00:13:45.921 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:45.921 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:46.180 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:46.180 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:46.180 true 00:13:46.180 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:46.440 true 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:46.440 rmmod nvme_tcp 00:13:46.440 rmmod nvme_fabrics 00:13:46.440 rmmod nvme_keyring 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 4039265 ']' 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 4039265 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 4039265 ']' 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 4039265 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.440 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4039265 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4039265' 00:13:46.700 killing process with pid 4039265 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 4039265 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 4039265 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.700 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:49.243 00:13:49.243 real 0m12.667s 00:13:49.243 user 0m10.111s 00:13:49.243 sys 0m6.746s 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:49.243 ************************************ 00:13:49.243 END TEST nvmf_multitarget 00:13:49.243 ************************************ 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.243 ************************************ 00:13:49.243 START TEST nvmf_rpc 00:13:49.243 ************************************ 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:49.243 * Looking for test storage... 
00:13:49.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.243 11:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:49.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.243 --rc genhtml_branch_coverage=1 00:13:49.243 --rc genhtml_function_coverage=1 00:13:49.243 --rc genhtml_legend=1 00:13:49.243 --rc geninfo_all_blocks=1 00:13:49.243 --rc geninfo_unexecuted_blocks=1 
00:13:49.243 00:13:49.243 ' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:49.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.243 --rc genhtml_branch_coverage=1 00:13:49.243 --rc genhtml_function_coverage=1 00:13:49.243 --rc genhtml_legend=1 00:13:49.243 --rc geninfo_all_blocks=1 00:13:49.243 --rc geninfo_unexecuted_blocks=1 00:13:49.243 00:13:49.243 ' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:49.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.243 --rc genhtml_branch_coverage=1 00:13:49.243 --rc genhtml_function_coverage=1 00:13:49.243 --rc genhtml_legend=1 00:13:49.243 --rc geninfo_all_blocks=1 00:13:49.243 --rc geninfo_unexecuted_blocks=1 00:13:49.243 00:13:49.243 ' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:49.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.243 --rc genhtml_branch_coverage=1 00:13:49.243 --rc genhtml_function_coverage=1 00:13:49.243 --rc genhtml_legend=1 00:13:49.243 --rc geninfo_all_blocks=1 00:13:49.243 --rc geninfo_unexecuted_blocks=1 00:13:49.243 00:13:49.243 ' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.243 11:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.243 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:49.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:49.244 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:49.244 11:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.389 
11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:13:57.389 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:57.389 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
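The discovery step traced above (`nvmf/common.sh@411`) finds the net interfaces bound to each NVMe-capable PCI device by globbing sysfs. A minimal sketch of that technique, with an illustrative helper name (`net_devs_for_pci` is not the SPDK function, and the real script additionally checks driver and link state):

```shell
#!/usr/bin/env bash
# Sketch, simplified from the pci_net_devs glob seen in the trace.
# The kernel exposes the net interfaces bound to a PCI device under
# /sys/bus/pci/devices/<addr>/net/, so a glob recovers the netdev names.
# The second argument (sysfs base) is an assumption added for testability.
net_devs_for_pci() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local devs=("$base/$pci/net/"*)
    [[ -e ${devs[0]} ]] || return 1    # glob did not match: no bound netdev
    printf '%s\n' "${devs[@]##*/}"     # strip the path, keep interface names
}
```

This mirrors the trace exactly: the glob on line 411 followed by the `${pci_net_devs[@]##*/}` strip on line 427 that reduces full sysfs paths to names like `cvl_0_0`.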
00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:57.389 Found net devices under 0000:31:00.0: cvl_0_0 00:13:57.389 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:57.390 Found net devices under 0000:31:00.1: cvl_0_1 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.390 11:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:57.390 
11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:57.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:13:57.390 00:13:57.390 --- 10.0.0.2 ping statistics --- 00:13:57.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.390 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:13:57.390 00:13:57.390 --- 10.0.0.1 ping statistics --- 00:13:57.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.390 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=4044448 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 4044448 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 4044448 ']' 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.390 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.651 [2024-11-19 11:08:05.756943] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:13:57.651 [2024-11-19 11:08:05.757011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.651 [2024-11-19 11:08:05.850027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.651 [2024-11-19 11:08:05.891325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.651 [2024-11-19 11:08:05.891361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:57.651 [2024-11-19 11:08:05.891368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.651 [2024-11-19 11:08:05.891375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.651 [2024-11-19 11:08:05.891381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.651 [2024-11-19 11:08:05.892924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.651 [2024-11-19 11:08:05.893179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.651 [2024-11-19 11:08:05.893179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.651 [2024-11-19 11:08:05.892983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.222 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.222 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:58.222 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:58.222 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:58.222 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.484 11:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:58.484 "tick_rate": 2400000000, 00:13:58.484 "poll_groups": [ 00:13:58.484 { 00:13:58.484 "name": "nvmf_tgt_poll_group_000", 00:13:58.484 "admin_qpairs": 0, 00:13:58.484 "io_qpairs": 0, 00:13:58.484 "current_admin_qpairs": 0, 00:13:58.484 "current_io_qpairs": 0, 00:13:58.484 "pending_bdev_io": 0, 00:13:58.484 "completed_nvme_io": 0, 00:13:58.484 "transports": [] 00:13:58.484 }, 00:13:58.484 { 00:13:58.484 "name": "nvmf_tgt_poll_group_001", 00:13:58.484 "admin_qpairs": 0, 00:13:58.484 "io_qpairs": 0, 00:13:58.484 "current_admin_qpairs": 0, 00:13:58.484 "current_io_qpairs": 0, 00:13:58.484 "pending_bdev_io": 0, 00:13:58.484 "completed_nvme_io": 0, 00:13:58.484 "transports": [] 00:13:58.484 }, 00:13:58.484 { 00:13:58.484 "name": "nvmf_tgt_poll_group_002", 00:13:58.484 "admin_qpairs": 0, 00:13:58.484 "io_qpairs": 0, 00:13:58.484 "current_admin_qpairs": 0, 00:13:58.484 "current_io_qpairs": 0, 00:13:58.484 "pending_bdev_io": 0, 00:13:58.484 "completed_nvme_io": 0, 00:13:58.484 "transports": [] 00:13:58.484 }, 00:13:58.484 { 00:13:58.484 "name": "nvmf_tgt_poll_group_003", 00:13:58.484 "admin_qpairs": 0, 00:13:58.484 "io_qpairs": 0, 00:13:58.484 "current_admin_qpairs": 0, 00:13:58.484 "current_io_qpairs": 0, 00:13:58.484 "pending_bdev_io": 0, 00:13:58.484 "completed_nvme_io": 0, 00:13:58.484 "transports": [] 00:13:58.484 } 00:13:58.484 ] 00:13:58.484 }' 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:58.484 11:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.484 [2024-11-19 11:08:06.720261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.484 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:58.484 "tick_rate": 2400000000, 00:13:58.484 "poll_groups": [ 00:13:58.484 { 00:13:58.484 "name": "nvmf_tgt_poll_group_000", 00:13:58.484 "admin_qpairs": 0, 00:13:58.484 "io_qpairs": 0, 00:13:58.484 "current_admin_qpairs": 0, 00:13:58.484 "current_io_qpairs": 0, 00:13:58.484 "pending_bdev_io": 0, 00:13:58.484 "completed_nvme_io": 0, 00:13:58.484 "transports": [ 00:13:58.484 { 00:13:58.484 "trtype": "TCP" 00:13:58.484 } 00:13:58.484 ] 00:13:58.484 }, 00:13:58.484 { 00:13:58.484 "name": "nvmf_tgt_poll_group_001", 00:13:58.484 "admin_qpairs": 0, 00:13:58.484 "io_qpairs": 0, 00:13:58.484 "current_admin_qpairs": 0, 00:13:58.484 "current_io_qpairs": 0, 00:13:58.484 "pending_bdev_io": 0, 00:13:58.484 
"completed_nvme_io": 0, 00:13:58.484 "transports": [ 00:13:58.484 { 00:13:58.484 "trtype": "TCP" 00:13:58.484 } 00:13:58.484 ] 00:13:58.484 }, 00:13:58.484 { 00:13:58.484 "name": "nvmf_tgt_poll_group_002", 00:13:58.484 "admin_qpairs": 0, 00:13:58.484 "io_qpairs": 0, 00:13:58.484 "current_admin_qpairs": 0, 00:13:58.484 "current_io_qpairs": 0, 00:13:58.484 "pending_bdev_io": 0, 00:13:58.484 "completed_nvme_io": 0, 00:13:58.484 "transports": [ 00:13:58.484 { 00:13:58.484 "trtype": "TCP" 00:13:58.484 } 00:13:58.484 ] 00:13:58.484 }, 00:13:58.485 { 00:13:58.485 "name": "nvmf_tgt_poll_group_003", 00:13:58.485 "admin_qpairs": 0, 00:13:58.485 "io_qpairs": 0, 00:13:58.485 "current_admin_qpairs": 0, 00:13:58.485 "current_io_qpairs": 0, 00:13:58.485 "pending_bdev_io": 0, 00:13:58.485 "completed_nvme_io": 0, 00:13:58.485 "transports": [ 00:13:58.485 { 00:13:58.485 "trtype": "TCP" 00:13:58.485 } 00:13:58.485 ] 00:13:58.485 } 00:13:58.485 ] 00:13:58.485 }' 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:58.485 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.747 
11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.747 Malloc1 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:58.747 11:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.747 [2024-11-19 11:08:06.924306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:58.747 [2024-11-19 11:08:06.961244] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:58.747 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:58.747 could not add new controller: failed to write to nvme-fabrics device 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.747 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.660 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.660 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:00.661 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.661 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:00.661 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
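The `waitforserial` sequence that just completed polls `lsblk` until a block device carrying the expected NVMe serial appears, capped at 16 iterations with a 2-second sleep. A hedged sketch of that polling pattern, assuming the helper and listing-function names are illustrative rather than SPDK's own:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial pattern visible in the trace (common.sh@1202-1212).
# Serial string and the 15-iteration cap are taken from the log itself.
list_block_devs() { lsblk -l -o NAME,SERIAL; }   # flat listing, one row per device

wait_for_serial() {
    local serial=$1 i=0 devices=0
    while (( i++ <= 15 )); do
        # count rows whose SERIAL column matches the connected subsystem
        devices=$(list_block_devs | grep -c -w "$serial")
        (( devices >= 1 )) && return 0
        sleep 2
    done
    return 1    # device never showed up within the retry budget
}
```

Polling `lsblk` rather than the fabrics device directly is what lets the harness confirm the namespace actually surfaced as a usable block device, e.g. `wait_for_serial SPDKISFASTANDAWESOME` after the `nvme connect` at `target/rpc.sh@62`.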
00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:02.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:02.575 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:02.576 [2024-11-19 11:08:10.724719] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396'
00:14:02.576 Failed to write to /dev/nvme-fabrics: Input/output error
00:14:02.576 could not add new controller: failed to write to nvme-fabrics device
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.576 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:03.955 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:14:03.955 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:14:03.955 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:03.955 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:14:03.955 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:14:06.499 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:06.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.500 [2024-11-19 11:08:14.462721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.500 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:07.884 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:07.884 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:14:07.884 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:07.884 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:14:07.884 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:09.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:09.803 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:10.064 [2024-11-19 11:08:18.224643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.064 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:11.979 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:11.979 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:14:11.979 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:11.979 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:14:11.979 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:13.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.896 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:13.896 [2024-11-19 11:08:21.997338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:13.896 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.896 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:13.897 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.897 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:13.897 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.897 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:13.897 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.897 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:13.897 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.897 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:15.283 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:15.283 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:14:15.283 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:15.283 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:14:15.283 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:14:17.199 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:17.199 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:17.199 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:17.199 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:17.199 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:17.199 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:14:17.199 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:17.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:17.461 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:17.461 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:14:17.461 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:17.461 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:17.461 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:17.461 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:17.461 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 [2024-11-19 11:08:25.722162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.462 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:19.376 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:19.376 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:14:19.376 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:19.376 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:14:19.376 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:21.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:21.289 [2024-11-19 11:08:29.442359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.289 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:22.673 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:22.673 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:14:22.673 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:22.673 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:14:22.673 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:14:25.219 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:25.219 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:25.219 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:25.219 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:25.219 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:25.219 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:14:25.219 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:25.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.219 [2024-11-19 11:08:33.176327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.219 [2024-11-19 11:08:33.248506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.219 
11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:14:25.219 [2024-11-19 11:08:33.316689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.219 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 [2024-11-19 11:08:33.384911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 [2024-11-19 11:08:33.453124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.220 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:25.220 "tick_rate": 2400000000, 00:14:25.220 "poll_groups": [ 00:14:25.220 { 00:14:25.220 "name": "nvmf_tgt_poll_group_000", 00:14:25.220 "admin_qpairs": 0, 00:14:25.220 "io_qpairs": 224, 00:14:25.220 "current_admin_qpairs": 0, 00:14:25.220 "current_io_qpairs": 0, 00:14:25.220 "pending_bdev_io": 0, 00:14:25.220 "completed_nvme_io": 225, 00:14:25.220 "transports": [ 00:14:25.220 { 00:14:25.220 "trtype": "TCP" 00:14:25.220 } 00:14:25.220 ] 00:14:25.220 }, 00:14:25.220 { 00:14:25.220 "name": "nvmf_tgt_poll_group_001", 00:14:25.220 "admin_qpairs": 1, 00:14:25.220 "io_qpairs": 223, 00:14:25.220 "current_admin_qpairs": 0, 00:14:25.220 "current_io_qpairs": 0, 00:14:25.220 "pending_bdev_io": 0, 00:14:25.220 "completed_nvme_io": 274, 00:14:25.220 "transports": [ 00:14:25.220 { 00:14:25.220 "trtype": "TCP" 00:14:25.220 } 00:14:25.220 ] 00:14:25.220 }, 00:14:25.220 { 00:14:25.220 "name": "nvmf_tgt_poll_group_002", 00:14:25.220 "admin_qpairs": 6, 00:14:25.220 "io_qpairs": 218, 00:14:25.220 "current_admin_qpairs": 0, 00:14:25.220 "current_io_qpairs": 0, 00:14:25.220 "pending_bdev_io": 0, 
00:14:25.220 "completed_nvme_io": 471, 00:14:25.220 "transports": [ 00:14:25.220 { 00:14:25.220 "trtype": "TCP" 00:14:25.220 } 00:14:25.220 ] 00:14:25.220 }, 00:14:25.220 { 00:14:25.220 "name": "nvmf_tgt_poll_group_003", 00:14:25.220 "admin_qpairs": 0, 00:14:25.220 "io_qpairs": 224, 00:14:25.220 "current_admin_qpairs": 0, 00:14:25.220 "current_io_qpairs": 0, 00:14:25.221 "pending_bdev_io": 0, 00:14:25.221 "completed_nvme_io": 269, 00:14:25.221 "transports": [ 00:14:25.221 { 00:14:25.221 "trtype": "TCP" 00:14:25.221 } 00:14:25.221 ] 00:14:25.221 } 00:14:25.221 ] 00:14:25.221 }' 00:14:25.221 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:25.221 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:25.221 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:25.221 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
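The `jsum` checks traced above pipe `nvmf_get_stats` JSON through `jq '.poll_groups[].admin_qpairs'` and sum with awk, yielding 7 admin and 889 I/O qpairs. A simplified sketch follows; it extracts the fields with awk alone so it runs without jq, and the embedded stats are a trimmed copy of the poll-group counts from the log (the `current_*` and `completed_*` fields are omitted):

```shell
stats='{
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 0, "io_qpairs": 224},
    {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 1, "io_qpairs": 223},
    {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 6, "io_qpairs": 218},
    {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 0, "io_qpairs": 224}
  ]
}'

# Hypothetical stand-in for jsum: sum every occurrence of a numeric
# field ($1) across the JSON, splitting on ':' and ',' instead of
# parsing with jq.
jsum_approx() {
    printf '%s\n' "$stats" |
        awk -v f="\"$1\"" -F'[:,]' '
            { for (i = 1; i < NF; i++) if ($i ~ f) s += $(i + 1) }
            END { print s + 0 }'
}

jsum_approx admin_qpairs   # 0+1+6+0 = 7, matching (( 7 > 0 ))
jsum_approx io_qpairs      # 224+223+218+224 = 889, matching (( 889 > 0 ))
```

The test only asserts the sums are positive, i.e. that each transport actually carried qpairs, rather than pinning exact counts that vary run to run.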
target/rpc.sh@123 -- # nvmftestfini 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:25.482 rmmod nvme_tcp 00:14:25.482 rmmod nvme_fabrics 00:14:25.482 rmmod nvme_keyring 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 4044448 ']' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 4044448 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 4044448 ']' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 4044448 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4044448 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4044448' 00:14:25.482 killing process with pid 4044448 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 4044448 00:14:25.482 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 4044448 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.742 11:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.655 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:27.655 00:14:27.655 real 0m38.863s 00:14:27.655 user 1m54.098s 00:14:27.655 sys 0m8.496s 00:14:27.655 11:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.655 11:08:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.655 ************************************ 00:14:27.655 END TEST nvmf_rpc 00:14:27.655 ************************************ 00:14:27.655 11:08:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:27.655 11:08:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:27.655 11:08:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.655 11:08:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:27.917 ************************************ 00:14:27.917 START TEST nvmf_invalid 00:14:27.917 ************************************ 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:27.917 * Looking for test storage... 
00:14:27.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:27.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.917 --rc genhtml_branch_coverage=1 00:14:27.917 --rc 
genhtml_function_coverage=1 00:14:27.917 --rc genhtml_legend=1 00:14:27.917 --rc geninfo_all_blocks=1 00:14:27.917 --rc geninfo_unexecuted_blocks=1 00:14:27.917 00:14:27.917 ' 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:27.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.917 --rc genhtml_branch_coverage=1 00:14:27.917 --rc genhtml_function_coverage=1 00:14:27.917 --rc genhtml_legend=1 00:14:27.917 --rc geninfo_all_blocks=1 00:14:27.917 --rc geninfo_unexecuted_blocks=1 00:14:27.917 00:14:27.917 ' 00:14:27.917 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:27.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.917 --rc genhtml_branch_coverage=1 00:14:27.917 --rc genhtml_function_coverage=1 00:14:27.917 --rc genhtml_legend=1 00:14:27.917 --rc geninfo_all_blocks=1 00:14:27.917 --rc geninfo_unexecuted_blocks=1 00:14:27.918 00:14:27.918 ' 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:27.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.918 --rc genhtml_branch_coverage=1 00:14:27.918 --rc genhtml_function_coverage=1 00:14:27.918 --rc genhtml_legend=1 00:14:27.918 --rc geninfo_all_blocks=1 00:14:27.918 --rc geninfo_unexecuted_blocks=1 00:14:27.918 00:14:27.918 ' 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.918 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:28.179 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.180 11:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:28.180 11:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:28.180 11:08:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:36.321 11:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.321 11:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:36.321 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:36.321 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:36.321 Found net devices under 0000:31:00.0: cvl_0_0 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.321 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:36.322 Found net devices under 0000:31:00.1: cvl_0_1 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.322 11:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.322 11:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:36.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:14:36.322 00:14:36.322 --- 10.0.0.2 ping statistics --- 00:14:36.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.322 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:14:36.322 00:14:36.322 --- 10.0.0.1 ping statistics --- 00:14:36.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.322 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:36.322 11:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:36.322 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=4054673 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 4054673 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 4054673 ']' 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.583 11:08:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:36.583 [2024-11-19 11:08:44.750780] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:14:36.583 [2024-11-19 11:08:44.750829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.583 [2024-11-19 11:08:44.835338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.583 [2024-11-19 11:08:44.871000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.583 [2024-11-19 11:08:44.871029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.583 [2024-11-19 11:08:44.871038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.583 [2024-11-19 11:08:44.871048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.583 [2024-11-19 11:08:44.871054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:36.583 [2024-11-19 11:08:44.872810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.583 [2024-11-19 11:08:44.872941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.583 [2024-11-19 11:08:44.872997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.583 [2024-11-19 11:08:44.872998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode544 00:14:37.526 [2024-11-19 11:08:45.763225] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:37.526 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:37.526 { 00:14:37.526 "nqn": "nqn.2016-06.io.spdk:cnode544", 00:14:37.526 "tgt_name": "foobar", 00:14:37.526 "method": "nvmf_create_subsystem", 00:14:37.526 "req_id": 1 00:14:37.526 } 00:14:37.527 Got JSON-RPC error 
response 00:14:37.527 response: 00:14:37.527 { 00:14:37.527 "code": -32603, 00:14:37.527 "message": "Unable to find target foobar" 00:14:37.527 }' 00:14:37.527 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:37.527 { 00:14:37.527 "nqn": "nqn.2016-06.io.spdk:cnode544", 00:14:37.527 "tgt_name": "foobar", 00:14:37.527 "method": "nvmf_create_subsystem", 00:14:37.527 "req_id": 1 00:14:37.527 } 00:14:37.527 Got JSON-RPC error response 00:14:37.527 response: 00:14:37.527 { 00:14:37.527 "code": -32603, 00:14:37.527 "message": "Unable to find target foobar" 00:14:37.527 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:37.527 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:37.527 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9385 00:14:37.787 [2024-11-19 11:08:45.955921] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9385: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:37.787 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:37.787 { 00:14:37.787 "nqn": "nqn.2016-06.io.spdk:cnode9385", 00:14:37.787 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:37.787 "method": "nvmf_create_subsystem", 00:14:37.787 "req_id": 1 00:14:37.787 } 00:14:37.787 Got JSON-RPC error response 00:14:37.787 response: 00:14:37.787 { 00:14:37.787 "code": -32602, 00:14:37.787 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:37.787 }' 00:14:37.788 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:37.788 { 00:14:37.788 "nqn": "nqn.2016-06.io.spdk:cnode9385", 00:14:37.788 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:37.788 "method": "nvmf_create_subsystem", 00:14:37.788 
"req_id": 1 00:14:37.788 } 00:14:37.788 Got JSON-RPC error response 00:14:37.788 response: 00:14:37.788 { 00:14:37.788 "code": -32602, 00:14:37.788 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:37.788 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:37.788 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:37.788 11:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5299 00:14:38.051 [2024-11-19 11:08:46.148564] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5299: invalid model number 'SPDK_Controller' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:38.051 { 00:14:38.051 "nqn": "nqn.2016-06.io.spdk:cnode5299", 00:14:38.051 "model_number": "SPDK_Controller\u001f", 00:14:38.051 "method": "nvmf_create_subsystem", 00:14:38.051 "req_id": 1 00:14:38.051 } 00:14:38.051 Got JSON-RPC error response 00:14:38.051 response: 00:14:38.051 { 00:14:38.051 "code": -32602, 00:14:38.051 "message": "Invalid MN SPDK_Controller\u001f" 00:14:38.051 }' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:38.051 { 00:14:38.051 "nqn": "nqn.2016-06.io.spdk:cnode5299", 00:14:38.051 "model_number": "SPDK_Controller\u001f", 00:14:38.051 "method": "nvmf_create_subsystem", 00:14:38.051 "req_id": 1 00:14:38.051 } 00:14:38.051 Got JSON-RPC error response 00:14:38.051 response: 00:14:38.051 { 00:14:38.051 "code": -32602, 00:14:38.051 "message": "Invalid MN SPDK_Controller\u001f" 00:14:38.051 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.051 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:38.051 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:38.051 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:38.052 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:38.052 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.052 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ']MVl^Ln#g#;lzl](Iljre' 00:14:38.052 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ']MVl^Ln#g#;lzl](Iljre' nqn.2016-06.io.spdk:cnode21185 00:14:38.316 [2024-11-19 11:08:46.505719] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21185: invalid serial number ']MVl^Ln#g#;lzl](Iljre' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:38.316 { 00:14:38.316 "nqn": "nqn.2016-06.io.spdk:cnode21185", 00:14:38.316 "serial_number": "]MVl^Ln#g#;lzl](Iljre", 00:14:38.316 "method": "nvmf_create_subsystem", 00:14:38.316 "req_id": 1 00:14:38.316 } 00:14:38.316 Got JSON-RPC error response 00:14:38.316 response: 00:14:38.316 { 00:14:38.316 "code": -32602, 00:14:38.316 "message": "Invalid SN ]MVl^Ln#g#;lzl](Iljre" 00:14:38.316 }' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:38.316 { 00:14:38.316 "nqn": "nqn.2016-06.io.spdk:cnode21185", 00:14:38.316 "serial_number": "]MVl^Ln#g#;lzl](Iljre", 00:14:38.316 "method": "nvmf_create_subsystem", 00:14:38.316 "req_id": 1 00:14:38.316 } 00:14:38.316 Got JSON-RPC error response 00:14:38.316 response: 00:14:38.316 { 00:14:38.316 "code": -32602, 00:14:38.316 "message": "Invalid SN ]MVl^Ln#g#;lzl](Iljre" 00:14:38.316 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:38.316 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 
00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 
00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 
11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.316 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.317 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:38.579 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:38.579 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:38.579 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:38.579 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:38.580 11:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:14:38.580 11:08:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '.>*zK&"!nrg,K9xyBeu_V*u*zK&"!nrg,K9xyBeu_V*u*zK&"!nrg,K9xyBeu_V*u!nrg,K9xyBeu_V*u*zK&\"!nrg,K9xyBeu_V*u*zK&\"!nrg,K9xyBeu_V*u*zK&\"!nrg,K9xyBeu_V*u /dev/null' 00:14:40.668 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.709 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.709 00:14:42.709 real 0m14.947s 00:14:42.709 user 0m20.890s 00:14:42.709 sys 0m7.315s 00:14:42.709 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.709 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:42.709 ************************************ 00:14:42.709 END TEST nvmf_invalid 00:14:42.709 ************************************ 00:14:42.709 11:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:42.709 11:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:42.709 11:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.709 11:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.988 ************************************ 00:14:42.988 START TEST nvmf_connect_stress 00:14:42.988 ************************************ 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:42.988 * Looking for test storage... 
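The character-assembly loop traced above (target/invalid.sh, markers @24/@25) builds a random string one byte at a time: pick a byte value, render it with `printf %x` plus `echo -e`, and append it. A minimal standalone sketch, using the byte values visible in the trace (the real script draws them at random):

```shell
#!/usr/bin/env bash
# Rebuild a string byte-by-byte, as target/invalid.sh does in its @24/@25 loop.
# Byte values below are the ones seen in the trace (d ' r ` q ]); the actual
# test picks random values to generate invalid NQNs.
string=""
for code in 100 39 114 96 113 93; do
    charhex=$(printf %x "$code")      # decimal byte -> hex, e.g. 100 -> 64
    char=$(echo -e "\\x$charhex")     # hex escape -> character, \x64 -> d
    string+=$char
done
echo "$string"                        # -> d'r`q]
```

Note that `echo -e` interpreting `\xHH` is a bashism; a strictly POSIX script would use `printf` with an octal escape instead.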
00:14:42.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:42.988 11:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.988 11:08:51 
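The `lt 1.15 2` call traced above runs scripts/common.sh's `cmp_versions`: both version strings are split on `.-:` into arrays (`ver1`, `ver2`) and compared component-wise as integers, padding the shorter array with zeros. A condensed sketch of that logic (function name `version_lt` is mine; it assumes plain decimal components without leading zeros):

```shell
#!/usr/bin/env bash
# Component-wise numeric version comparison, as in scripts/common.sh cmp_versions.
# Returns 0 (true) when $1 < $2, 1 otherwise.
version_lt() {
    local IFS=.                       # split only on '.' for this sketch
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}     # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                          # equal -> not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

The numeric comparison is what makes `1.2 < 1.15` come out false here, unlike a plain string compare.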
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:42.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.988 --rc genhtml_branch_coverage=1 00:14:42.988 --rc genhtml_function_coverage=1 00:14:42.988 --rc genhtml_legend=1 00:14:42.988 --rc geninfo_all_blocks=1 00:14:42.988 --rc geninfo_unexecuted_blocks=1 00:14:42.988 00:14:42.988 ' 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:42.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.988 --rc genhtml_branch_coverage=1 00:14:42.988 --rc genhtml_function_coverage=1 00:14:42.988 --rc genhtml_legend=1 00:14:42.988 --rc geninfo_all_blocks=1 00:14:42.988 --rc geninfo_unexecuted_blocks=1 00:14:42.988 00:14:42.988 ' 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:42.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.988 --rc genhtml_branch_coverage=1 00:14:42.988 --rc genhtml_function_coverage=1 00:14:42.988 --rc genhtml_legend=1 00:14:42.988 --rc geninfo_all_blocks=1 00:14:42.988 --rc geninfo_unexecuted_blocks=1 00:14:42.988 00:14:42.988 ' 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:42.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.988 --rc genhtml_branch_coverage=1 00:14:42.988 --rc genhtml_function_coverage=1 00:14:42.988 --rc genhtml_legend=1 00:14:42.988 --rc geninfo_all_blocks=1 00:14:42.988 --rc geninfo_unexecuted_blocks=1 00:14:42.988 00:14:42.988 ' 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.988 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:42.989 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.135 11:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:51.135 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.135 11:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:51.135 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.135 11:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.135 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:51.136 Found net devices under 0000:31:00.0: cvl_0_0 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:51.136 Found net devices under 0000:31:00.1: cvl_0_1 
00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:51.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:51.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:14:51.136 00:14:51.136 --- 10.0.0.2 ping statistics --- 00:14:51.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.136 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:14:51.136 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:14:51.397 00:14:51.397 --- 10.0.0.1 ping statistics --- 00:14:51.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.397 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.397 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:51.398 11:08:59 
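The `nvmf_tcp_init` sequence traced above isolates the target NIC in its own network namespace so target and initiator can talk over real hardware on one host. A condensed sketch of the commands (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing come from this log; requires root and will differ on other NICs, so treat it as illustrative configuration, not a drop-in script):

```shell
#!/usr/bin/env bash
# Target/initiator split via a network namespace, as in nvmf/common.sh nvmf_tcp_init.
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"              # move target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
# open the NVMe/TCP port toward the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target reachability check
```

The two `ping` runs in the log (root ns to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) verify both directions before the target application is launched inside the namespace.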
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=4060321 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 4060321 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 4060321 ']' 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.398 11:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.398 [2024-11-19 11:08:59.607028] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:14:51.398 [2024-11-19 11:08:59.607099] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.398 [2024-11-19 11:08:59.719133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:51.659 [2024-11-19 11:08:59.771936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.659 [2024-11-19 11:08:59.771987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.659 [2024-11-19 11:08:59.771995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.659 [2024-11-19 11:08:59.772002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.659 [2024-11-19 11:08:59.772009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:51.659 [2024-11-19 11:08:59.773909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.659 [2024-11-19 11:08:59.774167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.659 [2024-11-19 11:08:59.774167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.232 [2024-11-19 11:09:00.441271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.232 [2024-11-19 11:09:00.465636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.232 NULL1 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4060590 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.232 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:52.233 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:52.494 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:52.494 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.494 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.494 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.755 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.755 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:52.755 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.755 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.755 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.016 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.016 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:53.016 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.016 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.016 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.277 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.277 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:53.277 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.277 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.277 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.537 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.537 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:53.538 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.538 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.538 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.111 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.111 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:54.111 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.111 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.111 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.371 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.371 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:54.371 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.371 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.371 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.633 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.633 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:54.633 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.633 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.633 11:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.895 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.895 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:54.895 11:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.895 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.895 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.468 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.468 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:55.468 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.468 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.468 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.729 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.729 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:55.729 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.729 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.729 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.991 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.991 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:55.991 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.991 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.991 
11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.252 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.252 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:56.252 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.252 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.252 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.513 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.513 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:56.513 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.513 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.513 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.085 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.085 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:57.085 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.085 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.085 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.345 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.345 
11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:57.345 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.345 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.345 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.605 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.605 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:57.605 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.605 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.605 11:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.866 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.866 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:57.866 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.866 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.866 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.128 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.128 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:58.128 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:58.128 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.129 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.701 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.701 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:58.701 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.701 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.701 11:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.962 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.962 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:58.962 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.962 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.962 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.222 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.222 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:59.222 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.222 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.222 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:14:59.484 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.484 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:59.484 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.484 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.484 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.745 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.745 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:14:59.745 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.745 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.745 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.319 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.319 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:15:00.319 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.319 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.319 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.580 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.580 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 4060590 00:15:00.580 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.580 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.580 11:09:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.841 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.841 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:15:00.841 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.841 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.841 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.101 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.101 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:15:01.101 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.101 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.101 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.362 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.362 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:15:01.362 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.362 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:01.362 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.932 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.932 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:15:01.932 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.932 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.932 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.193 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.193 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:15:02.193 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.193 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.193 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.454 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4060590 00:15:02.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4060590) - No such process 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4060590 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
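The long run of repeated `kill -0 4060590` / `rpc_cmd` lines above is connect_stress.sh polling the background perf process between RPC bursts, until `kill -0` finally reports "No such process" and the script falls through to `wait`. The idiom, with a plain `sleep` standing in for the connect_stress binary, is:

```shell
#!/usr/bin/env bash
# Stand-in worker for the connect_stress perf process.
sleep 1 &
PERF_PID=$!

# kill -0 delivers no signal; it only tests that the PID still exists.
polls=0
while kill -0 "$PERF_PID" 2>/dev/null; do
    # the real test fires a burst of rpc_cmd calls here on each pass
    polls=$((polls + 1))
    sleep 0.1
done

# Reap the worker and pick up its saved exit status.
wait "$PERF_PID"
rc=$?
echo "worker exited rc=$rc after $polls polls"
```

This is why the log interleaves many identical `kill -0` checks with RPC traffic: the check is the loop condition, not an attempt to terminate the process, and the eventual "No such process" line is the expected exit path rather than an error.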
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:02.454 rmmod nvme_tcp 00:15:02.454 rmmod nvme_fabrics 00:15:02.454 rmmod nvme_keyring 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 4060321 ']' 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 4060321 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 4060321 ']' 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 4060321 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4060321 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4060321' 00:15:02.454 killing process with pid 4060321 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 4060321 00:15:02.454 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 4060321 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:02.715 11:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.715 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.629 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:04.629 00:15:04.629 real 0m21.899s 00:15:04.629 user 0m42.167s 00:15:04.629 sys 0m9.744s 00:15:04.629 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.891 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.891 ************************************ 00:15:04.891 END TEST nvmf_connect_stress 00:15:04.891 ************************************ 00:15:04.891 11:09:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:04.891 11:09:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:04.891 11:09:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.891 11:09:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.891 ************************************ 00:15:04.891 START TEST nvmf_fused_ordering 00:15:04.891 ************************************ 00:15:04.891 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:04.891 * Looking for test storage... 
00:15:04.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.892 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:04.892 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:04.892 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:05.154 11:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:05.154 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.155 11:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:05.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.155 --rc genhtml_branch_coverage=1 00:15:05.155 --rc genhtml_function_coverage=1 00:15:05.155 --rc genhtml_legend=1 00:15:05.155 --rc geninfo_all_blocks=1 00:15:05.155 --rc geninfo_unexecuted_blocks=1 00:15:05.155 00:15:05.155 ' 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:05.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.155 --rc genhtml_branch_coverage=1 00:15:05.155 --rc genhtml_function_coverage=1 00:15:05.155 --rc genhtml_legend=1 00:15:05.155 --rc geninfo_all_blocks=1 00:15:05.155 --rc geninfo_unexecuted_blocks=1 00:15:05.155 00:15:05.155 ' 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:05.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.155 --rc genhtml_branch_coverage=1 00:15:05.155 --rc genhtml_function_coverage=1 00:15:05.155 --rc genhtml_legend=1 00:15:05.155 --rc geninfo_all_blocks=1 00:15:05.155 --rc geninfo_unexecuted_blocks=1 00:15:05.155 00:15:05.155 ' 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:05.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.155 --rc genhtml_branch_coverage=1 00:15:05.155 --rc genhtml_function_coverage=1 00:15:05.155 --rc genhtml_legend=1 00:15:05.155 --rc geninfo_all_blocks=1 00:15:05.155 --rc geninfo_unexecuted_blocks=1 00:15:05.155 00:15:05.155 ' 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:05.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:05.155 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:05.156 11:09:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.323 11:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:13.323 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.323 11:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:13.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.323 11:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:13.323 Found net devices under 0000:31:00.0: cvl_0_0 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:13.323 Found net devices under 0000:31:00.1: cvl_0_1 
00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.323 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:13.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:13.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:15:13.324 00:15:13.324 --- 10.0.0.2 ping statistics --- 00:15:13.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.324 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:15:13.324 00:15:13.324 --- 10.0.0.1 ping statistics --- 00:15:13.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.324 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:13.324 11:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=4067834 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 4067834 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 4067834 ']' 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.324 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.586 [2024-11-19 11:09:21.690707] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:15:13.587 [2024-11-19 11:09:21.690756] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:13.587 [2024-11-19 11:09:21.792578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.587 [2024-11-19 11:09:21.832464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:13.587 [2024-11-19 11:09:21.832508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:13.587 [2024-11-19 11:09:21.832516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:13.587 [2024-11-19 11:09:21.832523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:13.587 [2024-11-19 11:09:21.832529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:13.587 [2024-11-19 11:09:21.833257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:14.161 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:14.161 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:15:14.161 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:14.161 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:14.161 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:14.423 [2024-11-19 11:09:22.544760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:14.423 [2024-11-19 11:09:22.569084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.423 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:14.424 NULL1
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.424 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:15:14.424 [2024-11-19 11:09:22.639625] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization...
00:15:14.424 [2024-11-19 11:09:22.639691] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068076 ]
00:15:14.685 Attached to nqn.2016-06.io.spdk:cnode1
00:15:14.685 Namespace ID: 1 size: 1GB
00:15:14.685 fused_ordering(0) 00:15:14.685 fused_ordering(1) 00:15:14.685 fused_ordering(2) 00:15:14.685 fused_ordering(3) 00:15:14.685 fused_ordering(4) 00:15:14.685 fused_ordering(5) 00:15:14.685 fused_ordering(6) 00:15:14.685 fused_ordering(7) 00:15:14.685 fused_ordering(8) 00:15:14.685 fused_ordering(9) 00:15:14.685 fused_ordering(10) 00:15:14.685 fused_ordering(11) 00:15:14.685 fused_ordering(12) 00:15:14.685 fused_ordering(13) 00:15:14.685 fused_ordering(14) 00:15:14.685 fused_ordering(15) 00:15:14.685 fused_ordering(16) 00:15:14.685 fused_ordering(17) 00:15:14.685 fused_ordering(18) 00:15:14.685 fused_ordering(19) 00:15:14.685 fused_ordering(20) 00:15:14.685 fused_ordering(21) 00:15:14.685 fused_ordering(22) 00:15:14.685 fused_ordering(23) 00:15:14.685 fused_ordering(24) 00:15:14.685 fused_ordering(25) 00:15:14.685 fused_ordering(26) 00:15:14.685 fused_ordering(27) 00:15:14.685
fused_ordering(28) 00:15:14.685 fused_ordering(29) 00:15:14.685 fused_ordering(30) 00:15:14.685 fused_ordering(31) 00:15:14.685 fused_ordering(32) 00:15:14.685 fused_ordering(33) 00:15:14.685 fused_ordering(34) 00:15:14.685 fused_ordering(35) 00:15:14.685 fused_ordering(36) 00:15:14.685 fused_ordering(37) 00:15:14.685 fused_ordering(38) 00:15:14.685 fused_ordering(39) 00:15:14.685 fused_ordering(40) 00:15:14.685 fused_ordering(41) 00:15:14.685 fused_ordering(42) 00:15:14.685 fused_ordering(43) 00:15:14.685 fused_ordering(44) 00:15:14.685 fused_ordering(45) 00:15:14.685 fused_ordering(46) 00:15:14.685 fused_ordering(47) 00:15:14.685 fused_ordering(48) 00:15:14.685 fused_ordering(49) 00:15:14.685 fused_ordering(50) 00:15:14.685 fused_ordering(51) 00:15:14.685 fused_ordering(52) 00:15:14.685 fused_ordering(53) 00:15:14.685 fused_ordering(54) 00:15:14.685 fused_ordering(55) 00:15:14.685 fused_ordering(56) 00:15:14.685 fused_ordering(57) 00:15:14.685 fused_ordering(58) 00:15:14.685 fused_ordering(59) 00:15:14.685 fused_ordering(60) 00:15:14.685 fused_ordering(61) 00:15:14.685 fused_ordering(62) 00:15:14.685 fused_ordering(63) 00:15:14.685 fused_ordering(64) 00:15:14.685 fused_ordering(65) 00:15:14.685 fused_ordering(66) 00:15:14.685 fused_ordering(67) 00:15:14.685 fused_ordering(68) 00:15:14.685 fused_ordering(69) 00:15:14.685 fused_ordering(70) 00:15:14.685 fused_ordering(71) 00:15:14.685 fused_ordering(72) 00:15:14.685 fused_ordering(73) 00:15:14.685 fused_ordering(74) 00:15:14.685 fused_ordering(75) 00:15:14.685 fused_ordering(76) 00:15:14.685 fused_ordering(77) 00:15:14.685 fused_ordering(78) 00:15:14.685 fused_ordering(79) 00:15:14.685 fused_ordering(80) 00:15:14.685 fused_ordering(81) 00:15:14.685 fused_ordering(82) 00:15:14.685 fused_ordering(83) 00:15:14.685 fused_ordering(84) 00:15:14.685 fused_ordering(85) 00:15:14.685 fused_ordering(86) 00:15:14.685 fused_ordering(87) 00:15:14.685 fused_ordering(88) 00:15:14.685 fused_ordering(89) 00:15:14.685 
fused_ordering(90) 00:15:14.685 fused_ordering(91) 00:15:14.685 fused_ordering(92) 00:15:14.685 fused_ordering(93) 00:15:14.685 fused_ordering(94) 00:15:14.685 fused_ordering(95) 00:15:14.685 fused_ordering(96) 00:15:14.685 fused_ordering(97) 00:15:14.685 fused_ordering(98) 00:15:14.685 fused_ordering(99) 00:15:14.685 fused_ordering(100) 00:15:14.685 fused_ordering(101) 00:15:14.685 fused_ordering(102) 00:15:14.685 fused_ordering(103) 00:15:14.685 fused_ordering(104) 00:15:14.685 fused_ordering(105) 00:15:14.685 fused_ordering(106) 00:15:14.685 fused_ordering(107) 00:15:14.685 fused_ordering(108) 00:15:14.685 fused_ordering(109) 00:15:14.685 fused_ordering(110) 00:15:14.685 fused_ordering(111) 00:15:14.685 fused_ordering(112) 00:15:14.685 fused_ordering(113) 00:15:14.685 fused_ordering(114) 00:15:14.685 fused_ordering(115) 00:15:14.685 fused_ordering(116) 00:15:14.685 fused_ordering(117) 00:15:14.685 fused_ordering(118) 00:15:14.685 fused_ordering(119) 00:15:14.685 fused_ordering(120) 00:15:14.685 fused_ordering(121) 00:15:14.685 fused_ordering(122) 00:15:14.685 fused_ordering(123) 00:15:14.685 fused_ordering(124) 00:15:14.685 fused_ordering(125) 00:15:14.685 fused_ordering(126) 00:15:14.685 fused_ordering(127) 00:15:14.685 fused_ordering(128) 00:15:14.685 fused_ordering(129) 00:15:14.685 fused_ordering(130) 00:15:14.685 fused_ordering(131) 00:15:14.685 fused_ordering(132) 00:15:14.685 fused_ordering(133) 00:15:14.685 fused_ordering(134) 00:15:14.685 fused_ordering(135) 00:15:14.685 fused_ordering(136) 00:15:14.685 fused_ordering(137) 00:15:14.685 fused_ordering(138) 00:15:14.685 fused_ordering(139) 00:15:14.685 fused_ordering(140) 00:15:14.685 fused_ordering(141) 00:15:14.685 fused_ordering(142) 00:15:14.685 fused_ordering(143) 00:15:14.685 fused_ordering(144) 00:15:14.685 fused_ordering(145) 00:15:14.685 fused_ordering(146) 00:15:14.685 fused_ordering(147) 00:15:14.685 fused_ordering(148) 00:15:14.685 fused_ordering(149) 00:15:14.685 fused_ordering(150) 
00:15:14.685 fused_ordering(151) 00:15:14.685 fused_ordering(152) 00:15:14.685 fused_ordering(153) 00:15:14.685 fused_ordering(154) 00:15:14.685 fused_ordering(155) 00:15:14.685 fused_ordering(156) 00:15:14.685 fused_ordering(157) 00:15:14.685 fused_ordering(158) 00:15:14.685 fused_ordering(159) 00:15:14.685 fused_ordering(160) 00:15:14.685 fused_ordering(161) 00:15:14.685 fused_ordering(162) 00:15:14.685 fused_ordering(163) 00:15:14.685 fused_ordering(164) 00:15:14.685 fused_ordering(165) 00:15:14.685 fused_ordering(166) 00:15:14.685 fused_ordering(167) 00:15:14.685 fused_ordering(168) 00:15:14.685 fused_ordering(169) 00:15:14.685 fused_ordering(170) 00:15:14.685 fused_ordering(171) 00:15:14.685 fused_ordering(172) 00:15:14.685 fused_ordering(173) 00:15:14.685 fused_ordering(174) 00:15:14.685 fused_ordering(175) 00:15:14.685 fused_ordering(176) 00:15:14.685 fused_ordering(177) 00:15:14.685 fused_ordering(178) 00:15:14.685 fused_ordering(179) 00:15:14.685 fused_ordering(180) 00:15:14.685 fused_ordering(181) 00:15:14.685 fused_ordering(182) 00:15:14.685 fused_ordering(183) 00:15:14.685 fused_ordering(184) 00:15:14.685 fused_ordering(185) 00:15:14.685 fused_ordering(186) 00:15:14.685 fused_ordering(187) 00:15:14.685 fused_ordering(188) 00:15:14.685 fused_ordering(189) 00:15:14.685 fused_ordering(190) 00:15:14.685 fused_ordering(191) 00:15:14.685 fused_ordering(192) 00:15:14.685 fused_ordering(193) 00:15:14.685 fused_ordering(194) 00:15:14.685 fused_ordering(195) 00:15:14.685 fused_ordering(196) 00:15:14.685 fused_ordering(197) 00:15:14.685 fused_ordering(198) 00:15:14.685 fused_ordering(199) 00:15:14.685 fused_ordering(200) 00:15:14.685 fused_ordering(201) 00:15:14.685 fused_ordering(202) 00:15:14.685 fused_ordering(203) 00:15:14.686 fused_ordering(204) 00:15:14.686 fused_ordering(205) 00:15:15.259 fused_ordering(206) 00:15:15.259 fused_ordering(207) 00:15:15.259 fused_ordering(208) 00:15:15.259 fused_ordering(209) 00:15:15.259 fused_ordering(210) 00:15:15.259 
fused_ordering(211) 00:15:15.259 fused_ordering(212) 00:15:15.259 fused_ordering(213) 00:15:15.259 fused_ordering(214) 00:15:15.259 fused_ordering(215) 00:15:15.259 fused_ordering(216) 00:15:15.259 fused_ordering(217) 00:15:15.259 fused_ordering(218) 00:15:15.259 fused_ordering(219) 00:15:15.259 fused_ordering(220) 00:15:15.259 fused_ordering(221) 00:15:15.259 fused_ordering(222) 00:15:15.259 fused_ordering(223) 00:15:15.259 fused_ordering(224) 00:15:15.259 fused_ordering(225) 00:15:15.259 fused_ordering(226) 00:15:15.259 fused_ordering(227) 00:15:15.259 fused_ordering(228) 00:15:15.259 fused_ordering(229) 00:15:15.259 fused_ordering(230) 00:15:15.259 fused_ordering(231) 00:15:15.259 fused_ordering(232) 00:15:15.259 fused_ordering(233) 00:15:15.259 fused_ordering(234) 00:15:15.259 fused_ordering(235) 00:15:15.259 fused_ordering(236) 00:15:15.259 fused_ordering(237) 00:15:15.259 fused_ordering(238) 00:15:15.259 fused_ordering(239) 00:15:15.259 fused_ordering(240) 00:15:15.259 fused_ordering(241) 00:15:15.259 fused_ordering(242) 00:15:15.259 fused_ordering(243) 00:15:15.259 fused_ordering(244) 00:15:15.259 fused_ordering(245) 00:15:15.259 fused_ordering(246) 00:15:15.259 fused_ordering(247) 00:15:15.259 fused_ordering(248) 00:15:15.259 fused_ordering(249) 00:15:15.259 fused_ordering(250) 00:15:15.259 fused_ordering(251) 00:15:15.259 fused_ordering(252) 00:15:15.259 fused_ordering(253) 00:15:15.259 fused_ordering(254) 00:15:15.259 fused_ordering(255) 00:15:15.259 fused_ordering(256) 00:15:15.259 fused_ordering(257) 00:15:15.259 fused_ordering(258) 00:15:15.259 fused_ordering(259) 00:15:15.259 fused_ordering(260) 00:15:15.259 fused_ordering(261) 00:15:15.259 fused_ordering(262) 00:15:15.259 fused_ordering(263) 00:15:15.259 fused_ordering(264) 00:15:15.259 fused_ordering(265) 00:15:15.259 fused_ordering(266) 00:15:15.259 fused_ordering(267) 00:15:15.259 fused_ordering(268) 00:15:15.259 fused_ordering(269) 00:15:15.259 fused_ordering(270) 00:15:15.259 fused_ordering(271) 
00:15:15.259 fused_ordering(272) 00:15:15.259 fused_ordering(273) 00:15:15.259 fused_ordering(274) 00:15:15.259 fused_ordering(275) 00:15:15.259 fused_ordering(276) 00:15:15.259 fused_ordering(277) 00:15:15.259 fused_ordering(278) 00:15:15.259 fused_ordering(279) 00:15:15.259 fused_ordering(280) 00:15:15.259 fused_ordering(281) 00:15:15.259 fused_ordering(282) 00:15:15.259 fused_ordering(283) 00:15:15.259 fused_ordering(284) 00:15:15.259 fused_ordering(285) 00:15:15.259 fused_ordering(286) 00:15:15.259 fused_ordering(287) 00:15:15.259 fused_ordering(288) 00:15:15.259 fused_ordering(289) 00:15:15.259 fused_ordering(290) 00:15:15.259 fused_ordering(291) 00:15:15.259 fused_ordering(292) 00:15:15.259 fused_ordering(293) 00:15:15.259 fused_ordering(294) 00:15:15.259 fused_ordering(295) 00:15:15.259 fused_ordering(296) 00:15:15.259 fused_ordering(297) 00:15:15.259 fused_ordering(298) 00:15:15.259 fused_ordering(299) 00:15:15.259 fused_ordering(300) 00:15:15.259 fused_ordering(301) 00:15:15.259 fused_ordering(302) 00:15:15.259 fused_ordering(303) 00:15:15.259 fused_ordering(304) 00:15:15.259 fused_ordering(305) 00:15:15.259 fused_ordering(306) 00:15:15.259 fused_ordering(307) 00:15:15.259 fused_ordering(308) 00:15:15.259 fused_ordering(309) 00:15:15.259 fused_ordering(310) 00:15:15.259 fused_ordering(311) 00:15:15.259 fused_ordering(312) 00:15:15.259 fused_ordering(313) 00:15:15.259 fused_ordering(314) 00:15:15.259 fused_ordering(315) 00:15:15.259 fused_ordering(316) 00:15:15.259 fused_ordering(317) 00:15:15.259 fused_ordering(318) 00:15:15.259 fused_ordering(319) 00:15:15.259 fused_ordering(320) 00:15:15.259 fused_ordering(321) 00:15:15.259 fused_ordering(322) 00:15:15.259 fused_ordering(323) 00:15:15.259 fused_ordering(324) 00:15:15.259 fused_ordering(325) 00:15:15.259 fused_ordering(326) 00:15:15.259 fused_ordering(327) 00:15:15.260 fused_ordering(328) 00:15:15.260 fused_ordering(329) 00:15:15.260 fused_ordering(330) 00:15:15.260 fused_ordering(331) 00:15:15.260 
fused_ordering(332) 00:15:15.260 fused_ordering(333) 00:15:15.260 fused_ordering(334) 00:15:15.260 fused_ordering(335) 00:15:15.260 fused_ordering(336) 00:15:15.260 fused_ordering(337) 00:15:15.260 fused_ordering(338) 00:15:15.260 fused_ordering(339) 00:15:15.260 fused_ordering(340) 00:15:15.260 fused_ordering(341) 00:15:15.260 fused_ordering(342) 00:15:15.260 fused_ordering(343) 00:15:15.260 fused_ordering(344) 00:15:15.260 fused_ordering(345) 00:15:15.260 fused_ordering(346) 00:15:15.260 fused_ordering(347) 00:15:15.260 fused_ordering(348) 00:15:15.260 fused_ordering(349) 00:15:15.260 fused_ordering(350) 00:15:15.260 fused_ordering(351) 00:15:15.260 fused_ordering(352) 00:15:15.260 fused_ordering(353) 00:15:15.260 fused_ordering(354) 00:15:15.260 fused_ordering(355) 00:15:15.260 fused_ordering(356) 00:15:15.260 fused_ordering(357) 00:15:15.260 fused_ordering(358) 00:15:15.260 fused_ordering(359) 00:15:15.260 fused_ordering(360) 00:15:15.260 fused_ordering(361) 00:15:15.260 fused_ordering(362) 00:15:15.260 fused_ordering(363) 00:15:15.260 fused_ordering(364) 00:15:15.260 fused_ordering(365) 00:15:15.260 fused_ordering(366) 00:15:15.260 fused_ordering(367) 00:15:15.260 fused_ordering(368) 00:15:15.260 fused_ordering(369) 00:15:15.260 fused_ordering(370) 00:15:15.260 fused_ordering(371) 00:15:15.260 fused_ordering(372) 00:15:15.260 fused_ordering(373) 00:15:15.260 fused_ordering(374) 00:15:15.260 fused_ordering(375) 00:15:15.260 fused_ordering(376) 00:15:15.260 fused_ordering(377) 00:15:15.260 fused_ordering(378) 00:15:15.260 fused_ordering(379) 00:15:15.260 fused_ordering(380) 00:15:15.260 fused_ordering(381) 00:15:15.260 fused_ordering(382) 00:15:15.260 fused_ordering(383) 00:15:15.260 fused_ordering(384) 00:15:15.260 fused_ordering(385) 00:15:15.260 fused_ordering(386) 00:15:15.260 fused_ordering(387) 00:15:15.260 fused_ordering(388) 00:15:15.260 fused_ordering(389) 00:15:15.260 fused_ordering(390) 00:15:15.260 fused_ordering(391) 00:15:15.260 fused_ordering(392) 
00:15:15.260 fused_ordering(393) 00:15:15.260 fused_ordering(394) 00:15:15.260 fused_ordering(395) 00:15:15.260 fused_ordering(396) 00:15:15.260 fused_ordering(397) 00:15:15.260 fused_ordering(398) 00:15:15.260 fused_ordering(399) 00:15:15.260 fused_ordering(400) 00:15:15.260 fused_ordering(401) 00:15:15.260 fused_ordering(402) 00:15:15.260 fused_ordering(403) 00:15:15.260 fused_ordering(404) 00:15:15.260 fused_ordering(405) 00:15:15.260 fused_ordering(406) 00:15:15.260 fused_ordering(407) 00:15:15.260 fused_ordering(408) 00:15:15.260 fused_ordering(409) 00:15:15.260 fused_ordering(410) 00:15:15.520 fused_ordering(411) 00:15:15.520 fused_ordering(412) 00:15:15.520 fused_ordering(413) 00:15:15.520 fused_ordering(414) 00:15:15.520 fused_ordering(415) 00:15:15.520 fused_ordering(416) 00:15:15.521 fused_ordering(417) 00:15:15.521 fused_ordering(418) 00:15:15.521 fused_ordering(419) 00:15:15.521 fused_ordering(420) 00:15:15.521 fused_ordering(421) 00:15:15.521 fused_ordering(422) 00:15:15.521 fused_ordering(423) 00:15:15.521 fused_ordering(424) 00:15:15.521 fused_ordering(425) 00:15:15.521 fused_ordering(426) 00:15:15.521 fused_ordering(427) 00:15:15.521 fused_ordering(428) 00:15:15.521 fused_ordering(429) 00:15:15.521 fused_ordering(430) 00:15:15.521 fused_ordering(431) 00:15:15.521 fused_ordering(432) 00:15:15.521 fused_ordering(433) 00:15:15.521 fused_ordering(434) 00:15:15.521 fused_ordering(435) 00:15:15.521 fused_ordering(436) 00:15:15.521 fused_ordering(437) 00:15:15.521 fused_ordering(438) 00:15:15.521 fused_ordering(439) 00:15:15.521 fused_ordering(440) 00:15:15.521 fused_ordering(441) 00:15:15.521 fused_ordering(442) 00:15:15.521 fused_ordering(443) 00:15:15.521 fused_ordering(444) 00:15:15.521 fused_ordering(445) 00:15:15.521 fused_ordering(446) 00:15:15.521 fused_ordering(447) 00:15:15.521 fused_ordering(448) 00:15:15.521 fused_ordering(449) 00:15:15.521 fused_ordering(450) 00:15:15.521 fused_ordering(451) 00:15:15.521 fused_ordering(452) 00:15:15.521 
fused_ordering(453) 00:15:15.521 fused_ordering(454) 00:15:15.521 fused_ordering(455) 00:15:15.521 fused_ordering(456) 00:15:15.521 fused_ordering(457) 00:15:15.521 fused_ordering(458) 00:15:15.521 fused_ordering(459) 00:15:15.521 fused_ordering(460) 00:15:15.521 fused_ordering(461) 00:15:15.521 fused_ordering(462) 00:15:15.521 fused_ordering(463) 00:15:15.521 fused_ordering(464) 00:15:15.521 fused_ordering(465) 00:15:15.521 fused_ordering(466) 00:15:15.521 fused_ordering(467) 00:15:15.521 fused_ordering(468) 00:15:15.521 fused_ordering(469) 00:15:15.521 fused_ordering(470) 00:15:15.521 fused_ordering(471) 00:15:15.521 fused_ordering(472) 00:15:15.521 fused_ordering(473) 00:15:15.521 fused_ordering(474) 00:15:15.521 fused_ordering(475) 00:15:15.521 fused_ordering(476) 00:15:15.521 fused_ordering(477) 00:15:15.521 fused_ordering(478) 00:15:15.521 fused_ordering(479) 00:15:15.521 fused_ordering(480) 00:15:15.521 fused_ordering(481) 00:15:15.521 fused_ordering(482) 00:15:15.521 fused_ordering(483) 00:15:15.521 fused_ordering(484) 00:15:15.521 fused_ordering(485) 00:15:15.521 fused_ordering(486) 00:15:15.521 fused_ordering(487) 00:15:15.521 fused_ordering(488) 00:15:15.521 fused_ordering(489) 00:15:15.521 fused_ordering(490) 00:15:15.521 fused_ordering(491) 00:15:15.521 fused_ordering(492) 00:15:15.521 fused_ordering(493) 00:15:15.521 fused_ordering(494) 00:15:15.521 fused_ordering(495) 00:15:15.521 fused_ordering(496) 00:15:15.521 fused_ordering(497) 00:15:15.521 fused_ordering(498) 00:15:15.521 fused_ordering(499) 00:15:15.521 fused_ordering(500) 00:15:15.521 fused_ordering(501) 00:15:15.521 fused_ordering(502) 00:15:15.521 fused_ordering(503) 00:15:15.521 fused_ordering(504) 00:15:15.521 fused_ordering(505) 00:15:15.521 fused_ordering(506) 00:15:15.521 fused_ordering(507) 00:15:15.521 fused_ordering(508) 00:15:15.521 fused_ordering(509) 00:15:15.521 fused_ordering(510) 00:15:15.521 fused_ordering(511) 00:15:15.521 fused_ordering(512) 00:15:15.521 fused_ordering(513) 
00:15:15.521 fused_ordering(514) 00:15:15.521 fused_ordering(515) 00:15:15.521 fused_ordering(516) 00:15:15.521 fused_ordering(517) 00:15:15.521 fused_ordering(518) 00:15:15.521 fused_ordering(519) 00:15:15.521 fused_ordering(520) 00:15:15.521 fused_ordering(521) 00:15:15.521 fused_ordering(522) 00:15:15.521 fused_ordering(523) 00:15:15.521 fused_ordering(524) 00:15:15.521 fused_ordering(525) 00:15:15.521 fused_ordering(526) 00:15:15.521 fused_ordering(527) 00:15:15.521 fused_ordering(528) 00:15:15.521 fused_ordering(529) 00:15:15.521 fused_ordering(530) 00:15:15.521 fused_ordering(531) 00:15:15.521 fused_ordering(532) 00:15:15.521 fused_ordering(533) 00:15:15.521 fused_ordering(534) 00:15:15.521 fused_ordering(535) 00:15:15.521 fused_ordering(536) 00:15:15.521 fused_ordering(537) 00:15:15.521 fused_ordering(538) 00:15:15.521 fused_ordering(539) 00:15:15.521 fused_ordering(540) 00:15:15.521 fused_ordering(541) 00:15:15.521 fused_ordering(542) 00:15:15.521 fused_ordering(543) 00:15:15.521 fused_ordering(544) 00:15:15.521 fused_ordering(545) 00:15:15.521 fused_ordering(546) 00:15:15.521 fused_ordering(547) 00:15:15.521 fused_ordering(548) 00:15:15.521 fused_ordering(549) 00:15:15.521 fused_ordering(550) 00:15:15.521 fused_ordering(551) 00:15:15.521 fused_ordering(552) 00:15:15.521 fused_ordering(553) 00:15:15.521 fused_ordering(554) 00:15:15.521 fused_ordering(555) 00:15:15.521 fused_ordering(556) 00:15:15.521 fused_ordering(557) 00:15:15.521 fused_ordering(558) 00:15:15.521 fused_ordering(559) 00:15:15.521 fused_ordering(560) 00:15:15.521 fused_ordering(561) 00:15:15.521 fused_ordering(562) 00:15:15.521 fused_ordering(563) 00:15:15.521 fused_ordering(564) 00:15:15.521 fused_ordering(565) 00:15:15.521 fused_ordering(566) 00:15:15.521 fused_ordering(567) 00:15:15.521 fused_ordering(568) 00:15:15.521 fused_ordering(569) 00:15:15.521 fused_ordering(570) 00:15:15.521 fused_ordering(571) 00:15:15.521 fused_ordering(572) 00:15:15.521 fused_ordering(573) 00:15:15.521 
fused_ordering(574) 00:15:15.521 fused_ordering(575) 00:15:15.521 fused_ordering(576) 00:15:15.521 fused_ordering(577) 00:15:15.521 fused_ordering(578) 00:15:15.521 fused_ordering(579) 00:15:15.521 fused_ordering(580) 00:15:15.521 fused_ordering(581) 00:15:15.521 fused_ordering(582) 00:15:15.521 fused_ordering(583) 00:15:15.521 fused_ordering(584) 00:15:15.521 fused_ordering(585) 00:15:15.521 fused_ordering(586) 00:15:15.521 fused_ordering(587) 00:15:15.521 fused_ordering(588) 00:15:15.521 fused_ordering(589) 00:15:15.521 fused_ordering(590) 00:15:15.521 fused_ordering(591) 00:15:15.521 fused_ordering(592) 00:15:15.521 fused_ordering(593) 00:15:15.521 fused_ordering(594) 00:15:15.521 fused_ordering(595) 00:15:15.521 fused_ordering(596) 00:15:15.521 fused_ordering(597) 00:15:15.521 fused_ordering(598) 00:15:15.521 fused_ordering(599) 00:15:15.521 fused_ordering(600) 00:15:15.521 fused_ordering(601) 00:15:15.521 fused_ordering(602) 00:15:15.521 fused_ordering(603) 00:15:15.521 fused_ordering(604) 00:15:15.521 fused_ordering(605) 00:15:15.521 fused_ordering(606) 00:15:15.521 fused_ordering(607) 00:15:15.521 fused_ordering(608) 00:15:15.521 fused_ordering(609) 00:15:15.521 fused_ordering(610) 00:15:15.521 fused_ordering(611) 00:15:15.521 fused_ordering(612) 00:15:15.521 fused_ordering(613) 00:15:15.521 fused_ordering(614) 00:15:15.521 fused_ordering(615) 00:15:16.094 fused_ordering(616) 00:15:16.094 fused_ordering(617) 00:15:16.094 fused_ordering(618) 00:15:16.094 fused_ordering(619) 00:15:16.094 fused_ordering(620) 00:15:16.094 fused_ordering(621) 00:15:16.094 fused_ordering(622) 00:15:16.094 fused_ordering(623) 00:15:16.094 fused_ordering(624) 00:15:16.095 fused_ordering(625) 00:15:16.095 fused_ordering(626) 00:15:16.095 fused_ordering(627) 00:15:16.095 fused_ordering(628) 00:15:16.095 fused_ordering(629) 00:15:16.095 fused_ordering(630) 00:15:16.095 fused_ordering(631) 00:15:16.095 fused_ordering(632) 00:15:16.095 fused_ordering(633) 00:15:16.095 fused_ordering(634) 
00:15:16.095 fused_ordering(635) 00:15:16.095 fused_ordering(636) 00:15:16.095 fused_ordering(637) 00:15:16.095 fused_ordering(638) 00:15:16.095 fused_ordering(639) 00:15:16.095 fused_ordering(640) 00:15:16.095 fused_ordering(641) 00:15:16.095 fused_ordering(642) 00:15:16.095 fused_ordering(643) 00:15:16.095 fused_ordering(644) 00:15:16.095 fused_ordering(645) 00:15:16.095 fused_ordering(646) 00:15:16.095 fused_ordering(647) 00:15:16.095 fused_ordering(648) 00:15:16.095 fused_ordering(649) 00:15:16.095 fused_ordering(650) 00:15:16.095 fused_ordering(651) 00:15:16.095 fused_ordering(652) 00:15:16.095 fused_ordering(653) 00:15:16.095 fused_ordering(654) 00:15:16.095 fused_ordering(655) 00:15:16.095 fused_ordering(656) 00:15:16.095 fused_ordering(657) 00:15:16.095 fused_ordering(658) 00:15:16.095 fused_ordering(659) 00:15:16.095 fused_ordering(660) 00:15:16.095 fused_ordering(661) 00:15:16.095 fused_ordering(662) 00:15:16.095 fused_ordering(663) 00:15:16.095 fused_ordering(664) 00:15:16.095 fused_ordering(665) 00:15:16.095 fused_ordering(666) 00:15:16.095 fused_ordering(667) 00:15:16.095 fused_ordering(668) 00:15:16.095 fused_ordering(669) 00:15:16.095 fused_ordering(670) 00:15:16.095 fused_ordering(671) 00:15:16.095 fused_ordering(672) 00:15:16.095 fused_ordering(673) 00:15:16.095 fused_ordering(674) 00:15:16.095 fused_ordering(675) 00:15:16.095 fused_ordering(676) 00:15:16.095 fused_ordering(677) 00:15:16.095 fused_ordering(678) 00:15:16.095 fused_ordering(679) 00:15:16.095 fused_ordering(680) 00:15:16.095 fused_ordering(681) 00:15:16.095 fused_ordering(682) 00:15:16.095 fused_ordering(683) 00:15:16.095 fused_ordering(684) 00:15:16.095 fused_ordering(685) 00:15:16.095 fused_ordering(686) 00:15:16.095 fused_ordering(687) 00:15:16.095 fused_ordering(688) 00:15:16.095 fused_ordering(689) 00:15:16.095 fused_ordering(690) 00:15:16.095 fused_ordering(691) 00:15:16.095 fused_ordering(692) 00:15:16.095 fused_ordering(693) 00:15:16.095 fused_ordering(694) 00:15:16.095 
fused_ordering(695) 00:15:16.095 fused_ordering(696) 00:15:16.095 fused_ordering(697) 00:15:16.095 fused_ordering(698) 00:15:16.095 fused_ordering(699) 00:15:16.095 fused_ordering(700) 00:15:16.095 fused_ordering(701) 00:15:16.095 fused_ordering(702) 00:15:16.095 fused_ordering(703) 00:15:16.095 fused_ordering(704) 00:15:16.095 fused_ordering(705) 00:15:16.095 fused_ordering(706) 00:15:16.095 fused_ordering(707) 00:15:16.095 fused_ordering(708) 00:15:16.095 fused_ordering(709) 00:15:16.095 fused_ordering(710) 00:15:16.095 fused_ordering(711) 00:15:16.095 fused_ordering(712) 00:15:16.095 fused_ordering(713) 00:15:16.095 fused_ordering(714) 00:15:16.095 fused_ordering(715) 00:15:16.095 fused_ordering(716) 00:15:16.095 fused_ordering(717) 00:15:16.095 fused_ordering(718) 00:15:16.095 fused_ordering(719) 00:15:16.095 fused_ordering(720) 00:15:16.095 fused_ordering(721) 00:15:16.095 fused_ordering(722) 00:15:16.095 fused_ordering(723) 00:15:16.095 fused_ordering(724) 00:15:16.095 fused_ordering(725) 00:15:16.095 fused_ordering(726) 00:15:16.095 fused_ordering(727) 00:15:16.095 fused_ordering(728) 00:15:16.095 fused_ordering(729) 00:15:16.095 fused_ordering(730) 00:15:16.095 fused_ordering(731) 00:15:16.095 fused_ordering(732) 00:15:16.095 fused_ordering(733) 00:15:16.095 fused_ordering(734) 00:15:16.095 fused_ordering(735) 00:15:16.095 fused_ordering(736) 00:15:16.095 fused_ordering(737) 00:15:16.095 fused_ordering(738) 00:15:16.095 fused_ordering(739) 00:15:16.095 fused_ordering(740) 00:15:16.095 fused_ordering(741) 00:15:16.095 fused_ordering(742) 00:15:16.095 fused_ordering(743) 00:15:16.095 fused_ordering(744) 00:15:16.095 fused_ordering(745) 00:15:16.095 fused_ordering(746) 00:15:16.095 fused_ordering(747) 00:15:16.095 fused_ordering(748) 00:15:16.095 fused_ordering(749) 00:15:16.095 fused_ordering(750) 00:15:16.095 fused_ordering(751) 00:15:16.095 fused_ordering(752) 00:15:16.095 fused_ordering(753) 00:15:16.095 fused_ordering(754) 00:15:16.095 fused_ordering(755) 
00:15:16.095 fused_ordering(756) 00:15:16.095 fused_ordering(757) 00:15:16.095 fused_ordering(758) 00:15:16.095 fused_ordering(759) 00:15:16.095 fused_ordering(760) 00:15:16.095 fused_ordering(761) 00:15:16.095 fused_ordering(762) 00:15:16.095 fused_ordering(763) 00:15:16.095 fused_ordering(764) 00:15:16.095 fused_ordering(765) 00:15:16.095 fused_ordering(766) 00:15:16.095 fused_ordering(767) 00:15:16.095 fused_ordering(768) 00:15:16.095 fused_ordering(769) 00:15:16.095 fused_ordering(770) 00:15:16.095 fused_ordering(771) 00:15:16.095 fused_ordering(772) 00:15:16.095 fused_ordering(773) 00:15:16.095 fused_ordering(774) 00:15:16.095 fused_ordering(775) 00:15:16.095 fused_ordering(776) 00:15:16.095 fused_ordering(777) 00:15:16.095 fused_ordering(778) 00:15:16.095 fused_ordering(779) 00:15:16.095 fused_ordering(780) 00:15:16.095 fused_ordering(781) 00:15:16.095 fused_ordering(782) 00:15:16.095 fused_ordering(783) 00:15:16.095 fused_ordering(784) 00:15:16.095 fused_ordering(785) 00:15:16.095 fused_ordering(786) 00:15:16.095 fused_ordering(787) 00:15:16.095 fused_ordering(788) 00:15:16.095 fused_ordering(789) 00:15:16.095 fused_ordering(790) 00:15:16.095 fused_ordering(791) 00:15:16.095 fused_ordering(792) 00:15:16.095 fused_ordering(793) 00:15:16.095 fused_ordering(794) 00:15:16.095 fused_ordering(795) 00:15:16.095 fused_ordering(796) 00:15:16.095 fused_ordering(797) 00:15:16.095 fused_ordering(798) 00:15:16.095 fused_ordering(799) 00:15:16.095 fused_ordering(800) 00:15:16.095 fused_ordering(801) 00:15:16.095 fused_ordering(802) 00:15:16.095 fused_ordering(803) 00:15:16.095 fused_ordering(804) 00:15:16.095 fused_ordering(805) 00:15:16.095 fused_ordering(806) 00:15:16.095 fused_ordering(807) 00:15:16.095 fused_ordering(808) 00:15:16.095 fused_ordering(809) 00:15:16.095 fused_ordering(810) 00:15:16.095 fused_ordering(811) 00:15:16.095 fused_ordering(812) 00:15:16.095 fused_ordering(813) 00:15:16.095 fused_ordering(814) 00:15:16.095 fused_ordering(815) 00:15:16.095 
fused_ordering(816) 00:15:16.095 fused_ordering(817) 00:15:16.095 fused_ordering(818) 00:15:16.095 fused_ordering(819) 00:15:16.095 fused_ordering(820) 00:15:16.666 fused_ordering(821) 00:15:16.666 fused_ordering(822) 00:15:16.666 fused_ordering(823) 00:15:16.666 fused_ordering(824) 00:15:16.666 fused_ordering(825) 00:15:16.666 fused_ordering(826) 00:15:16.666 fused_ordering(827) 00:15:16.666 fused_ordering(828) 00:15:16.666 fused_ordering(829) 00:15:16.666 fused_ordering(830) 00:15:16.666 fused_ordering(831) 00:15:16.666 fused_ordering(832) 00:15:16.666 fused_ordering(833) 00:15:16.666 fused_ordering(834) 00:15:16.666 fused_ordering(835) 00:15:16.666 fused_ordering(836) 00:15:16.666 fused_ordering(837) 00:15:16.666 fused_ordering(838) 00:15:16.666 fused_ordering(839) 00:15:16.666 fused_ordering(840) 00:15:16.666 fused_ordering(841) 00:15:16.666 fused_ordering(842) 00:15:16.666 fused_ordering(843) 00:15:16.666 fused_ordering(844) 00:15:16.666 fused_ordering(845) 00:15:16.666 fused_ordering(846) 00:15:16.666 fused_ordering(847) 00:15:16.666 fused_ordering(848) 00:15:16.666 fused_ordering(849) 00:15:16.666 fused_ordering(850) 00:15:16.666 fused_ordering(851) 00:15:16.666 fused_ordering(852) 00:15:16.666 fused_ordering(853) 00:15:16.666 fused_ordering(854) 00:15:16.666 fused_ordering(855) 00:15:16.666 fused_ordering(856) 00:15:16.666 fused_ordering(857) 00:15:16.666 fused_ordering(858) 00:15:16.666 fused_ordering(859) 00:15:16.666 fused_ordering(860) 00:15:16.666 fused_ordering(861) 00:15:16.666 fused_ordering(862) 00:15:16.666 fused_ordering(863) 00:15:16.666 fused_ordering(864) 00:15:16.666 fused_ordering(865) 00:15:16.666 fused_ordering(866) 00:15:16.666 fused_ordering(867) 00:15:16.666 fused_ordering(868) 00:15:16.666 fused_ordering(869) 00:15:16.666 fused_ordering(870) 00:15:16.666 fused_ordering(871) 00:15:16.666 fused_ordering(872) 00:15:16.666 fused_ordering(873) 00:15:16.666 fused_ordering(874) 00:15:16.666 fused_ordering(875) 00:15:16.666 fused_ordering(876) 
00:15:16.666 fused_ordering(877) 00:15:16.666 fused_ordering(878) 00:15:16.666 fused_ordering(879) 00:15:16.666 fused_ordering(880) 00:15:16.666 fused_ordering(881) 00:15:16.666 fused_ordering(882) 00:15:16.666 fused_ordering(883) 00:15:16.666 fused_ordering(884) 00:15:16.666 fused_ordering(885) 00:15:16.666 fused_ordering(886) 00:15:16.666 fused_ordering(887) 00:15:16.666 fused_ordering(888) 00:15:16.666 fused_ordering(889) 00:15:16.666 fused_ordering(890) 00:15:16.666 fused_ordering(891) 00:15:16.666 fused_ordering(892) 00:15:16.666 fused_ordering(893) 00:15:16.666 fused_ordering(894) 00:15:16.666 fused_ordering(895) 00:15:16.666 fused_ordering(896) 00:15:16.666 fused_ordering(897) 00:15:16.666 fused_ordering(898) 00:15:16.666 fused_ordering(899) 00:15:16.666 fused_ordering(900) 00:15:16.666 fused_ordering(901) 00:15:16.666 fused_ordering(902) 00:15:16.666 fused_ordering(903) 00:15:16.666 fused_ordering(904) 00:15:16.666 fused_ordering(905) 00:15:16.666 fused_ordering(906) 00:15:16.666 fused_ordering(907) 00:15:16.666 fused_ordering(908) 00:15:16.666 fused_ordering(909) 00:15:16.666 fused_ordering(910) 00:15:16.666 fused_ordering(911) 00:15:16.666 fused_ordering(912) 00:15:16.666 fused_ordering(913) 00:15:16.666 fused_ordering(914) 00:15:16.666 fused_ordering(915) 00:15:16.666 fused_ordering(916) 00:15:16.666 fused_ordering(917) 00:15:16.667 fused_ordering(918) 00:15:16.667 fused_ordering(919) 00:15:16.667 fused_ordering(920) 00:15:16.667 fused_ordering(921) 00:15:16.667 fused_ordering(922) 00:15:16.667 fused_ordering(923) 00:15:16.667 fused_ordering(924) 00:15:16.667 fused_ordering(925) 00:15:16.667 fused_ordering(926) 00:15:16.667 fused_ordering(927) 00:15:16.667 fused_ordering(928) 00:15:16.667 fused_ordering(929) 00:15:16.667 fused_ordering(930) 00:15:16.667 fused_ordering(931) 00:15:16.667 fused_ordering(932) 00:15:16.667 fused_ordering(933) 00:15:16.667 fused_ordering(934) 00:15:16.667 fused_ordering(935) 00:15:16.667 fused_ordering(936) 00:15:16.667 
fused_ordering(937) 00:15:16.667 fused_ordering(938) 00:15:16.667 fused_ordering(939) 00:15:16.667 fused_ordering(940) 00:15:16.667 fused_ordering(941) 00:15:16.667 fused_ordering(942) 00:15:16.667 fused_ordering(943) 00:15:16.667 fused_ordering(944) 00:15:16.667 fused_ordering(945) 00:15:16.667 fused_ordering(946) 00:15:16.667 fused_ordering(947) 00:15:16.667 fused_ordering(948) 00:15:16.667 fused_ordering(949) 00:15:16.667 fused_ordering(950) 00:15:16.667 fused_ordering(951) 00:15:16.667 fused_ordering(952) 00:15:16.667 fused_ordering(953) 00:15:16.667 fused_ordering(954) 00:15:16.667 fused_ordering(955) 00:15:16.667 fused_ordering(956) 00:15:16.667 fused_ordering(957) 00:15:16.667 fused_ordering(958) 00:15:16.667 fused_ordering(959) 00:15:16.667 fused_ordering(960) 00:15:16.667 fused_ordering(961) 00:15:16.667 fused_ordering(962) 00:15:16.667 fused_ordering(963) 00:15:16.667 fused_ordering(964) 00:15:16.667 fused_ordering(965) 00:15:16.667 fused_ordering(966) 00:15:16.667 fused_ordering(967) 00:15:16.667 fused_ordering(968) 00:15:16.667 fused_ordering(969) 00:15:16.667 fused_ordering(970) 00:15:16.667 fused_ordering(971) 00:15:16.667 fused_ordering(972) 00:15:16.667 fused_ordering(973) 00:15:16.667 fused_ordering(974) 00:15:16.667 fused_ordering(975) 00:15:16.667 fused_ordering(976) 00:15:16.667 fused_ordering(977) 00:15:16.667 fused_ordering(978) 00:15:16.667 fused_ordering(979) 00:15:16.667 fused_ordering(980) 00:15:16.667 fused_ordering(981) 00:15:16.667 fused_ordering(982) 00:15:16.667 fused_ordering(983) 00:15:16.667 fused_ordering(984) 00:15:16.667 fused_ordering(985) 00:15:16.667 fused_ordering(986) 00:15:16.667 fused_ordering(987) 00:15:16.667 fused_ordering(988) 00:15:16.667 fused_ordering(989) 00:15:16.667 fused_ordering(990) 00:15:16.667 fused_ordering(991) 00:15:16.667 fused_ordering(992) 00:15:16.667 fused_ordering(993) 00:15:16.667 fused_ordering(994) 00:15:16.667 fused_ordering(995) 00:15:16.667 fused_ordering(996) 00:15:16.667 fused_ordering(997) 
00:15:16.667 fused_ordering(998) 00:15:16.667 fused_ordering(999) 00:15:16.667 fused_ordering(1000) 00:15:16.667 fused_ordering(1001) 00:15:16.667 fused_ordering(1002) 00:15:16.667 fused_ordering(1003) 00:15:16.667 fused_ordering(1004) 00:15:16.667 fused_ordering(1005) 00:15:16.667 fused_ordering(1006) 00:15:16.667 fused_ordering(1007) 00:15:16.667 fused_ordering(1008) 00:15:16.667 fused_ordering(1009) 00:15:16.667 fused_ordering(1010) 00:15:16.667 fused_ordering(1011) 00:15:16.667 fused_ordering(1012) 00:15:16.667 fused_ordering(1013) 00:15:16.667 fused_ordering(1014) 00:15:16.667 fused_ordering(1015) 00:15:16.667 fused_ordering(1016) 00:15:16.667 fused_ordering(1017) 00:15:16.667 fused_ordering(1018) 00:15:16.667 fused_ordering(1019) 00:15:16.667 fused_ordering(1020) 00:15:16.667 fused_ordering(1021) 00:15:16.667 fused_ordering(1022) 00:15:16.667 fused_ordering(1023) 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.667 rmmod nvme_tcp 00:15:16.667 rmmod nvme_fabrics 00:15:16.667 rmmod nvme_keyring 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 4067834 ']' 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 4067834 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 4067834 ']' 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 4067834 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4067834 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4067834' 00:15:16.667 killing process with pid 4067834 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 4067834 00:15:16.667 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 4067834 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.929 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.844 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:18.844 00:15:18.844 real 0m14.066s 00:15:18.844 user 0m7.078s 00:15:18.844 sys 0m7.572s 00:15:18.844 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.844 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:18.844 ************************************ 00:15:18.844 END TEST nvmf_fused_ordering 00:15:18.844 ************************************ 00:15:18.844 11:09:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:18.844 11:09:27 
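The teardown traced above ends with a `killprocess` call: probe the pid with `kill -0`, look up its command name with `ps`, signal it, then `wait` for it. A minimal sketch of that sequence, reconstructed from the logged commands rather than the real `autotest_common.sh` source, so details may differ:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper traced in the log above; reconstructed
# from the xtrace output (kill -0, ps -o comm=, kill, wait), not the source.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 probes whether the pid is still alive without sending a signal
    kill -0 "$pid" 2> /dev/null || return 0
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # the trace branches when the name is "sudo"; that path is omitted here
    [ "$name" = sudo ] && echo "target runs under sudo"
    echo "killing process with pid $pid"   # matches the log line above
    kill "$pid"
    # reap the child so the pid cannot be reused while we still reference it
    wait "$pid" 2> /dev/null || true
}
```

Calling it twice is safe: once the process is reaped, the `kill -0` probe fails and the helper returns 0 immediately.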
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.844 11:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.844 11:09:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:19.107 ************************************ 00:15:19.107 START TEST nvmf_ns_masking 00:15:19.107 ************************************ 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:19.107 * Looking for test storage... 00:15:19.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.107 11:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:19.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.107 --rc genhtml_branch_coverage=1 00:15:19.107 --rc genhtml_function_coverage=1 00:15:19.107 --rc genhtml_legend=1 00:15:19.107 --rc geninfo_all_blocks=1 00:15:19.107 --rc geninfo_unexecuted_blocks=1 00:15:19.107 00:15:19.107 ' 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:19.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.107 --rc genhtml_branch_coverage=1 00:15:19.107 --rc genhtml_function_coverage=1 00:15:19.107 --rc genhtml_legend=1 00:15:19.107 --rc geninfo_all_blocks=1 00:15:19.107 --rc geninfo_unexecuted_blocks=1 00:15:19.107 00:15:19.107 ' 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:19.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.107 --rc genhtml_branch_coverage=1 00:15:19.107 --rc genhtml_function_coverage=1 00:15:19.107 --rc genhtml_legend=1 00:15:19.107 --rc geninfo_all_blocks=1 00:15:19.107 --rc geninfo_unexecuted_blocks=1 00:15:19.107 00:15:19.107 ' 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:19.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.107 --rc genhtml_branch_coverage=1 00:15:19.107 --rc 
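The `lt 1.15 2` trace above walks a component-wise version comparison: split both versions on `.`, `-`, and `:`, then compare numeric components left to right. A minimal sketch of that logic, assuming the function names and separator set seen in the trace rather than the actual `scripts/common.sh` implementation:

```shell
# Sketch of the cmp_versions logic traced above (scripts/common.sh@333-368);
# reconstructed from the xtrace output, so details may differ from the source.
cmp_versions() {
    local IFS=.-:                 # split versions on '.', '-', ':' as the trace does
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v cmp=0
    local len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        # missing components compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0)
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if (( d1 != d2 )); then
            if (( d1 < d2 )); then cmp=-1; else cmp=1; fi
            break
        fi
    done
    case $op in
        '<')  (( cmp < 0 )) ;;
        '<=') (( cmp <= 0 )) ;;
        '>')  (( cmp > 0 )) ;;
        '>=') (( cmp >= 0 )) ;;
        '==') (( cmp == 0 )) ;;
    esac
}
lt() { cmp_versions "$1" '<' "$2"; }   # mirrors the lt wrapper in the trace
```

Numeric comparison per component is what makes `1.9 < 1.15` hold here, unlike a plain string compare.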
genhtml_function_coverage=1 00:15:19.107 --rc genhtml_legend=1 00:15:19.107 --rc geninfo_all_blocks=1 00:15:19.107 --rc geninfo_unexecuted_blocks=1 00:15:19.107 00:15:19.107 ' 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.107 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:19.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
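The `line 33: [: : integer expression expected` message above is a genuine shell error captured by the log: the traced command is `'[' '' -eq 1 ']'`, and the `test` builtin cannot use an empty string as an integer operand for `-eq`. A minimal reproduction and the usual guard (the variable name is illustrative, not the one `nvmf/common.sh` uses):

```shell
flag=""                               # an unset/empty toggle, as in the trace

# Reproduces the error: an empty string is not an integer operand for -eq,
# so test exits with status 2 and complains on stderr
[ "$flag" -eq 1 ] 2> /dev/null || echo "integer expression expected"

# Guarded form: default the empty value before the numeric test
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

`${flag:-0}` substitutes 0 when the variable is unset *or* empty, which is exactly the case the log tripped over.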
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=943a5574-d385-45d7-af86-944fc061db4e 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=04f3eae1-14d9-494f-985b-1073e3227b76 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2b50d658-e5fd-4249-a5aa-479efe45917f 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.108 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.368 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:19.368 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:19.368 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:19.368 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:27.512 11:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.512 11:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.512 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:27.513 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:27.513 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:15:27.513 Found net devices under 0000:31:00.0: cvl_0_0 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:27.513 Found net devices under 0000:31:00.1: cvl_0_1 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:27.513 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:27.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:27.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:15:27.774 00:15:27.774 --- 10.0.0.2 ping statistics --- 00:15:27.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.774 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:27.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:27.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:15:27.774 00:15:27.774 --- 10.0.0.1 ping statistics --- 00:15:27.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.774 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=4073231 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 4073231 
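Before the target app starts, the `nvmf_tcp_init` calls traced above split the two E810 ports into a point-to-point NVMe/TCP topology: one port is moved into a private network namespace as the target side, the other stays in the root namespace as the initiator. A minimal sketch of that configuration, using the interface and namespace names from this run; it requires root and this specific hardware, so it is shown as an illustrative fragment rather than something runnable elsewhere:

```shell
# Sketch of the nvmf_tcp_init sequence traced above (names from this run):
# cvl_0_0 becomes the target NIC inside a private netns, cvl_0_1 stays in
# the root namespace as the initiator. Requires root.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
ping -c 1 10.0.0.2                                        # initiator -> target check
```

The pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify both directions of this link before the target is launched with `ip netns exec cvl_0_0_ns_spdk nvmf_tgt`.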
00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 4073231 ']' 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.774 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:27.774 [2024-11-19 11:09:35.998255] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:15:27.774 [2024-11-19 11:09:35.998322] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.774 [2024-11-19 11:09:36.090142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.035 [2024-11-19 11:09:36.130171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.035 [2024-11-19 11:09:36.130211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:28.035 [2024-11-19 11:09:36.130219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.035 [2024-11-19 11:09:36.130226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.035 [2024-11-19 11:09:36.130232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.035 [2024-11-19 11:09:36.130831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.606 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.606 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:28.607 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.607 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.607 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:28.607 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.607 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:28.866 [2024-11-19 11:09:36.988277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.867 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:28.867 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:28.867 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:28.867 Malloc1 00:15:28.867 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:29.127 Malloc2 00:15:29.127 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:29.387 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:29.647 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.647 [2024-11-19 11:09:37.918553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.648 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:29.648 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b50d658-e5fd-4249-a5aa-479efe45917f -a 10.0.0.2 -s 4420 -i 4 00:15:29.908 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.908 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:29.908 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.908 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:29.908 11:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:31.821 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:31.821 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:31.821 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:32.083 [ 0]:0x1 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.083 
11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecf80a9e34af45779b7d09c517044930 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecf80a9e34af45779b7d09c517044930 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.083 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:32.344 [ 0]:0x1 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecf80a9e34af45779b7d09c517044930 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecf80a9e34af45779b7d09c517044930 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:32.344 [ 1]:0x2 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
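The repeated `ns_is_visible` checks above reduce to one comparison: list the namespaces the controller exposes with `nvme list-ns`, read the NGUID via `nvme id-ns … -o json | jq -r .nguid`, and treat the namespace as visible when that NGUID is non-zero. A self-contained sketch of the final comparison, using an NGUID value echoed in this trace as sample data (on a live system it would come from the controller):

```shell
# ns_is_visible, reduced to its final comparison: a namespace counts as
# visible when the NGUID reported by `nvme id-ns` is non-zero. The sample
# value is the one printed in the trace above, not queried live here.
zero=00000000000000000000000000000000
nguid=9bbf6c27477a45a598e78be0cb2510c9   # NGUID of namespace 2 in this run
if [[ $nguid != "$zero" ]]; then
  echo "namespace visible"
else
  echo "namespace masked"
fi
```

When masking hides a namespace, `id-ns` on the masked NSID yields the all-zero NGUID, which is exactly the `00000000…` value the later `NOT ns_is_visible` steps in this log expect.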
00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bbf6c27477a45a598e78be0cb2510c9 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bbf6c27477a45a598e78be0cb2510c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:32.344 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.605 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.605 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:32.866 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:32.866 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b50d658-e5fd-4249-a5aa-479efe45917f -a 10.0.0.2 -s 4420 -i 4 00:15:33.126 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:33.126 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:33.126 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.126 11:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:33.126 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:33.126 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
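At this point the trace enters the harness's `NOT` wrapper from `common/autotest_common.sh`, which asserts that the wrapped command fails; here it checks that `ns_is_visible 0x1` no longer succeeds after the namespace was re-added with `--no-auto-visible`. A simplified sketch of the wrapper's observable behavior (the real helper also validates that the argument is callable and tracks an `es` error code, as the surrounding trace shows):

```shell
# Simplified NOT: succeed only when the wrapped command fails. The real
# helper additionally checks the argument type and propagates an error
# status (`es`); this keeps just the inversion the trace exercises.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  fi
  return 0     # command failed, as the test requires
}

NOT false && echo "failure correctly detected"
```

With masking active, `ns_is_visible 0x1` sees the all-zero NGUID and fails, so `NOT ns_is_visible 0x1` passes, which is the outcome recorded in the following lines.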
00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:35.039 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.040 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:35.300 [ 0]:0x2 00:15:35.300 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:35.300 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.300 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bbf6c27477a45a598e78be0cb2510c9 00:15:35.300 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bbf6c27477a45a598e78be0cb2510c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.301 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:35.301 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:35.301 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.301 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:35.301 [ 0]:0x1 00:15:35.301 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:35.301 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.561 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecf80a9e34af45779b7d09c517044930 00:15:35.561 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecf80a9e34af45779b7d09c517044930 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.561 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:35.562 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.562 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:35.562 [ 1]:0x2 00:15:35.562 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:35.562 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.562 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bbf6c27477a45a598e78be0cb2510c9 00:15:35.562 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bbf6c27477a45a598e78be0cb2510c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.562 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:35.823 11:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:35.823 [ 0]:0x2 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bbf6c27477a45a598e78be0cb2510c9 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bbf6c27477a45a598e78be0cb2510c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.823 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:36.083 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:36.084 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b50d658-e5fd-4249-a5aa-479efe45917f -a 10.0.0.2 -s 4420 -i 4 00:15:36.344 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:36.344 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:36.344 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:36.344 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:36.344 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:36.344 11:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:38.271 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:38.271 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:38.271 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:38.271 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:38.271 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.271 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:38.271 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:38.271 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:38.602 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:38.602 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:38.602 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:38.602 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.602 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:38.602 [ 0]:0x1 00:15:38.602 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:38.602 11:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecf80a9e34af45779b7d09c517044930 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecf80a9e34af45779b7d09c517044930 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:38.603 [ 1]:0x2 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bbf6c27477a45a598e78be0cb2510c9 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bbf6c27477a45a598e78be0cb2510c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.603 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:38.911 
11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:38.911 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:38.911 [ 0]:0x2 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bbf6c27477a45a598e78be0cb2510c9 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bbf6c27477a45a598e78be0cb2510c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.911 11:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:38.911 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:39.172 [2024-11-19 11:09:47.261793] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:39.172 request: 00:15:39.172 { 00:15:39.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.172 "nsid": 2, 00:15:39.172 "host": "nqn.2016-06.io.spdk:host1", 00:15:39.172 "method": "nvmf_ns_remove_host", 00:15:39.172 "req_id": 1 00:15:39.172 } 00:15:39.172 Got JSON-RPC error response 00:15:39.172 response: 00:15:39.172 { 00:15:39.172 "code": -32602, 00:15:39.172 "message": "Invalid parameters" 00:15:39.172 } 00:15:39.172 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:39.172 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:39.172 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:39.172 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:39.173 11:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:39.173 [ 0]:0x2 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bbf6c27477a45a598e78be0cb2510c9 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bbf6c27477a45a598e78be0cb2510c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:39.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4075732 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4075732 /var/tmp/host.sock 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 4075732 ']' 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:39.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.173 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:39.173 [2024-11-19 11:09:47.521306] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:15:39.173 [2024-11-19 11:09:47.521356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4075732 ] 00:15:39.434 [2024-11-19 11:09:47.597202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.434 [2024-11-19 11:09:47.633077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.005 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.005 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:40.005 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.266 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:40.527 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 943a5574-d385-45d7-af86-944fc061db4e 00:15:40.527 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:40.527 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 943A5574D38545D7AF86944FC061DB4E -i 00:15:40.527 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 04f3eae1-14d9-494f-985b-1073e3227b76 00:15:40.527 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:40.527 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 04F3EAE114D9494F985B1073E3227B76 -i 00:15:40.787 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:41.048 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:41.048 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:41.048 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:41.620 nvme0n1 00:15:41.620 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:41.620 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:41.880 nvme1n2 00:15:41.880 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:41.880 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:41.880 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:41.880 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:41.880 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:41.880 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:41.880 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:41.880 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:41.880 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:42.141 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 943a5574-d385-45d7-af86-944fc061db4e == \9\4\3\a\5\5\7\4\-\d\3\8\5\-\4\5\d\7\-\a\f\8\6\-\9\4\4\f\c\0\6\1\d\b\4\e ]] 00:15:42.141 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:42.141 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:42.141 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:42.401 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 04f3eae1-14d9-494f-985b-1073e3227b76 == \0\4\f\3\e\a\e\1\-\1\4\d\9\-\4\9\4\f\-\9\8\5\b\-\1\0\7\3\e\3\2\2\7\b\7\6 ]] 00:15:42.401 11:09:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.401 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:42.663 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 943a5574-d385-45d7-af86-944fc061db4e 00:15:42.663 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:42.663 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 943A5574D38545D7AF86944FC061DB4E 00:15:42.663 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:42.663 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 943A5574D38545D7AF86944FC061DB4E 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:42.664 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 943A5574D38545D7AF86944FC061DB4E 00:15:42.925 [2024-11-19 11:09:51.020173] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:42.925 [2024-11-19 11:09:51.020206] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:42.925 [2024-11-19 11:09:51.020216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.925 request: 00:15:42.925 { 00:15:42.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.925 "namespace": { 00:15:42.925 "bdev_name": "invalid", 00:15:42.925 "nsid": 1, 00:15:42.925 "nguid": "943A5574D38545D7AF86944FC061DB4E", 00:15:42.925 "no_auto_visible": false 00:15:42.925 }, 00:15:42.925 "method": "nvmf_subsystem_add_ns", 00:15:42.925 "req_id": 1 00:15:42.925 } 00:15:42.925 Got JSON-RPC error response 00:15:42.925 response: 00:15:42.925 { 00:15:42.926 "code": -32602, 00:15:42.926 "message": "Invalid parameters" 00:15:42.926 } 00:15:42.926 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:42.926 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.926 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.926 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.926 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 943a5574-d385-45d7-af86-944fc061db4e 00:15:42.926 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:42.926 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 943A5574D38545D7AF86944FC061DB4E -i 00:15:42.926 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 4075732 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 4075732 ']' 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 4075732 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4075732 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4075732' 00:15:45.471 killing process with pid 4075732 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 4075732 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 4075732 00:15:45.471 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:45.732 rmmod nvme_tcp 00:15:45.732 rmmod 
nvme_fabrics 00:15:45.732 rmmod nvme_keyring 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 4073231 ']' 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 4073231 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 4073231 ']' 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 4073231 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4073231 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4073231' 00:15:45.732 killing process with pid 4073231 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 4073231 00:15:45.732 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 4073231 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:45.993 
11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.993 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.910 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:47.910 00:15:47.910 real 0m29.003s 00:15:47.910 user 0m31.892s 00:15:47.910 sys 0m8.921s 00:15:47.910 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.910 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:47.910 ************************************ 00:15:47.910 END TEST nvmf_ns_masking 00:15:47.910 ************************************ 00:15:47.910 11:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:47.910 11:09:56 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:47.910 11:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.910 11:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.910 11:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 ************************************ 00:15:48.173 START TEST nvmf_nvme_cli 00:15:48.173 ************************************ 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:48.173 * Looking for test storage... 00:15:48.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.173 11:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.173 --rc genhtml_branch_coverage=1 00:15:48.173 --rc genhtml_function_coverage=1 00:15:48.173 --rc genhtml_legend=1 00:15:48.173 --rc geninfo_all_blocks=1 00:15:48.173 --rc geninfo_unexecuted_blocks=1 00:15:48.173 
00:15:48.173 ' 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.173 --rc genhtml_branch_coverage=1 00:15:48.173 --rc genhtml_function_coverage=1 00:15:48.173 --rc genhtml_legend=1 00:15:48.173 --rc geninfo_all_blocks=1 00:15:48.173 --rc geninfo_unexecuted_blocks=1 00:15:48.173 00:15:48.173 ' 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.173 --rc genhtml_branch_coverage=1 00:15:48.173 --rc genhtml_function_coverage=1 00:15:48.173 --rc genhtml_legend=1 00:15:48.173 --rc geninfo_all_blocks=1 00:15:48.173 --rc geninfo_unexecuted_blocks=1 00:15:48.173 00:15:48.173 ' 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:48.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.173 --rc genhtml_branch_coverage=1 00:15:48.173 --rc genhtml_function_coverage=1 00:15:48.173 --rc genhtml_legend=1 00:15:48.173 --rc geninfo_all_blocks=1 00:15:48.173 --rc geninfo_unexecuted_blocks=1 00:15:48.173 00:15:48.173 ' 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.173 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.174 11:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.174 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.435 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.435 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.435 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:48.435 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:48.435 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:48.435 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:48.435 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:56.583 11:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:56.583 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:56.583 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:56.583 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.584 11:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:56.584 Found net devices under 0000:31:00.0: cvl_0_0 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:56.584 Found net devices under 0000:31:00.1: cvl_0_1 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.584 11:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:56.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:15:56.584 00:15:56.584 --- 10.0.0.2 ping statistics --- 00:15:56.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.584 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:15:56.584 00:15:56.584 --- 10.0.0.1 ping statistics --- 00:15:56.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.584 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:56.584 11:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=4081802 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 4081802 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 4081802 ']' 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.584 11:10:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:56.846 [2024-11-19 11:10:04.954451] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:15:56.846 [2024-11-19 11:10:04.954523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.846 [2024-11-19 11:10:05.047933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.846 [2024-11-19 11:10:05.090302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.846 [2024-11-19 11:10:05.090340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.846 [2024-11-19 11:10:05.090348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.846 [2024-11-19 11:10:05.090355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.846 [2024-11-19 11:10:05.090361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:56.846 [2024-11-19 11:10:05.092143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.846 [2024-11-19 11:10:05.092392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.846 [2024-11-19 11:10:05.092393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.846 [2024-11-19 11:10:05.092228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.418 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.418 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:57.418 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:57.418 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.418 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 [2024-11-19 11:10:05.815071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 Malloc0 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 Malloc1 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 [2024-11-19 11:10:05.912759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.679 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:57.940 00:15:57.940 Discovery Log Number of Records 2, Generation counter 2 00:15:57.940 =====Discovery Log Entry 0====== 00:15:57.940 trtype: tcp 00:15:57.940 adrfam: ipv4 00:15:57.940 subtype: current discovery subsystem 00:15:57.940 treq: not required 00:15:57.940 portid: 0 00:15:57.940 trsvcid: 4420 
00:15:57.940 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:57.940 traddr: 10.0.0.2 00:15:57.940 eflags: explicit discovery connections, duplicate discovery information 00:15:57.940 sectype: none 00:15:57.940 =====Discovery Log Entry 1====== 00:15:57.940 trtype: tcp 00:15:57.940 adrfam: ipv4 00:15:57.940 subtype: nvme subsystem 00:15:57.940 treq: not required 00:15:57.940 portid: 0 00:15:57.940 trsvcid: 4420 00:15:57.940 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:57.940 traddr: 10.0.0.2 00:15:57.940 eflags: none 00:15:57.940 sectype: none 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:57.940 11:10:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.857 11:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:59.857 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:59.857 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.857 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:59.857 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:59.857 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:01.774 
11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:01.774 /dev/nvme0n2 ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.774 rmmod nvme_tcp 00:16:01.774 rmmod nvme_fabrics 00:16:01.774 rmmod nvme_keyring 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 4081802 ']' 
00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 4081802 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 4081802 ']' 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 4081802 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.774 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4081802 00:16:01.774 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.774 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.774 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4081802' 00:16:01.774 killing process with pid 4081802 00:16:01.774 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 4081802 00:16:01.774 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 4081802 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.036 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.949 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:03.949 00:16:03.949 real 0m15.997s 00:16:03.949 user 0m22.832s 00:16:03.949 sys 0m7.004s 00:16:03.949 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.949 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.949 ************************************ 00:16:03.949 END TEST nvmf_nvme_cli 00:16:03.949 ************************************ 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.211 ************************************ 00:16:04.211 
START TEST nvmf_vfio_user 00:16:04.211 ************************************ 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:04.211 * Looking for test storage... 00:16:04.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.211 11:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.211 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:04.474 11:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:04.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.474 --rc genhtml_branch_coverage=1 00:16:04.474 --rc genhtml_function_coverage=1 00:16:04.474 --rc genhtml_legend=1 00:16:04.474 --rc geninfo_all_blocks=1 00:16:04.474 --rc geninfo_unexecuted_blocks=1 00:16:04.474 00:16:04.474 ' 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:04.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.474 --rc genhtml_branch_coverage=1 00:16:04.474 --rc genhtml_function_coverage=1 00:16:04.474 --rc genhtml_legend=1 00:16:04.474 --rc geninfo_all_blocks=1 00:16:04.474 --rc geninfo_unexecuted_blocks=1 00:16:04.474 00:16:04.474 ' 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:04.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.474 --rc genhtml_branch_coverage=1 00:16:04.474 --rc genhtml_function_coverage=1 00:16:04.474 --rc genhtml_legend=1 00:16:04.474 --rc geninfo_all_blocks=1 00:16:04.474 --rc geninfo_unexecuted_blocks=1 00:16:04.474 00:16:04.474 ' 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:04.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.474 --rc genhtml_branch_coverage=1 00:16:04.474 --rc genhtml_function_coverage=1 00:16:04.474 --rc genhtml_legend=1 00:16:04.474 --rc geninfo_all_blocks=1 00:16:04.474 --rc geninfo_unexecuted_blocks=1 00:16:04.474 00:16:04.474 ' 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.474 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.475 
11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:04.475 11:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4083302 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4083302' 00:16:04.475 Process pid: 4083302 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4083302 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 4083302 ']' 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.475 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:04.475 [2024-11-19 11:10:12.662908] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:16:04.475 [2024-11-19 11:10:12.662981] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.475 [2024-11-19 11:10:12.750780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.475 [2024-11-19 11:10:12.791357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.475 [2024-11-19 11:10:12.791392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.475 [2024-11-19 11:10:12.791400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.475 [2024-11-19 11:10:12.791407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.475 [2024-11-19 11:10:12.791413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:04.475 [2024-11-19 11:10:12.793245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.475 [2024-11-19 11:10:12.793361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.475 [2024-11-19 11:10:12.793520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.475 [2024-11-19 11:10:12.793520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.417 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.417 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:05.417 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:06.360 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:06.360 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:06.360 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:06.360 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:06.360 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:06.360 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:06.620 Malloc1 00:16:06.620 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:06.882 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:07.143 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:07.143 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.143 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:07.143 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:07.403 Malloc2 00:16:07.403 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:07.662 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:07.662 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:07.922 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:07.922 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:07.922 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:07.922 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:07.922 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:07.922 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:07.922 [2024-11-19 11:10:16.197460] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:16:07.922 [2024-11-19 11:10:16.197504] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084000 ] 00:16:07.922 [2024-11-19 11:10:16.253018] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:07.922 [2024-11-19 11:10:16.261215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:07.922 [2024-11-19 11:10:16.261237] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2d35cf3000 00:16:07.922 [2024-11-19 11:10:16.262212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.922 [2024-11-19 11:10:16.263217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.922 [2024-11-19 11:10:16.264219] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.922 [2024-11-19 11:10:16.265220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.922 [2024-11-19 11:10:16.266223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.922 [2024-11-19 11:10:16.267229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.922 [2024-11-19 11:10:16.268238] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.922 [2024-11-19 11:10:16.269240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.922 [2024-11-19 11:10:16.270254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:07.922 [2024-11-19 11:10:16.270263] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2d35ce8000 00:16:07.922 [2024-11-19 11:10:16.271588] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:08.185 [2024-11-19 11:10:16.293017] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:08.185 [2024-11-19 11:10:16.293044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:08.185 [2024-11-19 11:10:16.295389] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:16:08.185 [2024-11-19 11:10:16.295434] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:08.185 [2024-11-19 11:10:16.295520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:08.185 [2024-11-19 11:10:16.295536] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:08.185 [2024-11-19 11:10:16.295541] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:08.185 [2024-11-19 11:10:16.296384] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:08.185 [2024-11-19 11:10:16.296393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:08.185 [2024-11-19 11:10:16.296401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:08.185 [2024-11-19 11:10:16.297390] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:08.185 [2024-11-19 11:10:16.297399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:08.185 [2024-11-19 11:10:16.297407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:08.185 [2024-11-19 11:10:16.298397] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:08.185 [2024-11-19 11:10:16.298405] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:08.185 [2024-11-19 11:10:16.299402] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:08.185 [2024-11-19 11:10:16.299410] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:08.185 [2024-11-19 11:10:16.299415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:08.185 [2024-11-19 11:10:16.299423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:08.185 [2024-11-19 11:10:16.299533] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:08.185 [2024-11-19 11:10:16.299539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:08.185 [2024-11-19 11:10:16.299544] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:08.185 [2024-11-19 11:10:16.300407] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:08.185 [2024-11-19 11:10:16.301405] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:08.185 [2024-11-19 11:10:16.302410] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:16:08.185 [2024-11-19 11:10:16.303409] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:08.185 [2024-11-19 11:10:16.303465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:08.185 [2024-11-19 11:10:16.304421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:08.185 [2024-11-19 11:10:16.304429] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:08.185 [2024-11-19 11:10:16.304434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:08.185 [2024-11-19 11:10:16.304456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:08.185 [2024-11-19 11:10:16.304464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:08.185 [2024-11-19 11:10:16.304479] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.185 [2024-11-19 11:10:16.304484] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.185 [2024-11-19 11:10:16.304488] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.185 [2024-11-19 11:10:16.304501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.185 [2024-11-19 11:10:16.304537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:08.185 [2024-11-19 11:10:16.304546] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:08.185 [2024-11-19 11:10:16.304551] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:08.185 [2024-11-19 11:10:16.304556] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:08.185 [2024-11-19 11:10:16.304561] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:08.185 [2024-11-19 11:10:16.304568] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:08.185 [2024-11-19 11:10:16.304572] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:08.185 [2024-11-19 11:10:16.304577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:08.185 [2024-11-19 11:10:16.304587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:08.185 [2024-11-19 11:10:16.304599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:08.185 [2024-11-19 11:10:16.304607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:08.185 [2024-11-19 11:10:16.304618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.185 [2024-11-19 
11:10:16.304626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.185 [2024-11-19 11:10:16.304635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.186 [2024-11-19 11:10:16.304643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.186 [2024-11-19 11:10:16.304648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.304671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.304679] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:08.186 [2024-11-19 11:10:16.304684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.304713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.304775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304791] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:08.186 [2024-11-19 11:10:16.304796] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:08.186 [2024-11-19 11:10:16.304799] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.186 [2024-11-19 11:10:16.304805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.304819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.304828] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:08.186 [2024-11-19 11:10:16.304836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304854] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.186 [2024-11-19 11:10:16.304858] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.186 [2024-11-19 11:10:16.304874] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.186 [2024-11-19 11:10:16.304881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.304900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.304912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304928] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.186 [2024-11-19 11:10:16.304932] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.186 [2024-11-19 11:10:16.304936] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.186 [2024-11-19 11:10:16.304942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.304956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.304964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.304995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.305000] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:08.186 [2024-11-19 11:10:16.305005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:08.186 [2024-11-19 11:10:16.305010] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:08.186 [2024-11-19 11:10:16.305028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.305038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.305050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.305062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.305074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.305081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.305092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.305099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.305113] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:08.186 [2024-11-19 11:10:16.305118] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:08.186 [2024-11-19 11:10:16.305121] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:08.186 [2024-11-19 11:10:16.305125] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:08.186 [2024-11-19 11:10:16.305128] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:08.186 [2024-11-19 11:10:16.305135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:16:08.186 [2024-11-19 11:10:16.305142] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:08.186 [2024-11-19 11:10:16.305147] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:08.186 [2024-11-19 11:10:16.305150] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.186 [2024-11-19 11:10:16.305156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.305164] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:08.186 [2024-11-19 11:10:16.305168] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.186 [2024-11-19 11:10:16.305171] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.186 [2024-11-19 11:10:16.305177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.305185] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:08.186 [2024-11-19 11:10:16.305189] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:08.186 [2024-11-19 11:10:16.305193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.186 [2024-11-19 11:10:16.305199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:08.186 [2024-11-19 11:10:16.305206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.305218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.305231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:08.186 [2024-11-19 11:10:16.305238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:08.186 ===================================================== 00:16:08.186 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:08.186 ===================================================== 00:16:08.186 Controller Capabilities/Features 00:16:08.186 ================================ 00:16:08.186 Vendor ID: 4e58 00:16:08.186 Subsystem Vendor ID: 4e58 00:16:08.186 Serial Number: SPDK1 00:16:08.186 Model Number: SPDK bdev Controller 00:16:08.186 Firmware Version: 25.01 00:16:08.186 Recommended Arb Burst: 6 00:16:08.186 IEEE OUI Identifier: 8d 6b 50 00:16:08.186 Multi-path I/O 00:16:08.186 May have multiple subsystem ports: Yes 00:16:08.186 May have multiple controllers: Yes 00:16:08.186 Associated with SR-IOV VF: No 00:16:08.186 Max Data Transfer Size: 131072 00:16:08.186 Max Number of Namespaces: 32 00:16:08.186 Max Number of I/O Queues: 127 00:16:08.186 NVMe Specification Version (VS): 1.3 00:16:08.186 NVMe Specification Version (Identify): 1.3 00:16:08.186 Maximum Queue Entries: 256 00:16:08.186 Contiguous Queues Required: Yes 00:16:08.186 Arbitration Mechanisms Supported 00:16:08.187 Weighted Round Robin: Not Supported 00:16:08.187 Vendor Specific: Not Supported 00:16:08.187 Reset Timeout: 15000 ms 00:16:08.187 Doorbell Stride: 4 bytes 00:16:08.187 NVM Subsystem Reset: Not Supported 00:16:08.187 Command Sets Supported 00:16:08.187 NVM Command Set: Supported 00:16:08.187 Boot Partition: Not Supported 00:16:08.187 Memory 
Page Size Minimum: 4096 bytes 00:16:08.187 Memory Page Size Maximum: 4096 bytes 00:16:08.187 Persistent Memory Region: Not Supported 00:16:08.187 Optional Asynchronous Events Supported 00:16:08.187 Namespace Attribute Notices: Supported 00:16:08.187 Firmware Activation Notices: Not Supported 00:16:08.187 ANA Change Notices: Not Supported 00:16:08.187 PLE Aggregate Log Change Notices: Not Supported 00:16:08.187 LBA Status Info Alert Notices: Not Supported 00:16:08.187 EGE Aggregate Log Change Notices: Not Supported 00:16:08.187 Normal NVM Subsystem Shutdown event: Not Supported 00:16:08.187 Zone Descriptor Change Notices: Not Supported 00:16:08.187 Discovery Log Change Notices: Not Supported 00:16:08.187 Controller Attributes 00:16:08.187 128-bit Host Identifier: Supported 00:16:08.187 Non-Operational Permissive Mode: Not Supported 00:16:08.187 NVM Sets: Not Supported 00:16:08.187 Read Recovery Levels: Not Supported 00:16:08.187 Endurance Groups: Not Supported 00:16:08.187 Predictable Latency Mode: Not Supported 00:16:08.187 Traffic Based Keep ALive: Not Supported 00:16:08.187 Namespace Granularity: Not Supported 00:16:08.187 SQ Associations: Not Supported 00:16:08.187 UUID List: Not Supported 00:16:08.187 Multi-Domain Subsystem: Not Supported 00:16:08.187 Fixed Capacity Management: Not Supported 00:16:08.187 Variable Capacity Management: Not Supported 00:16:08.187 Delete Endurance Group: Not Supported 00:16:08.187 Delete NVM Set: Not Supported 00:16:08.187 Extended LBA Formats Supported: Not Supported 00:16:08.187 Flexible Data Placement Supported: Not Supported 00:16:08.187 00:16:08.187 Controller Memory Buffer Support 00:16:08.187 ================================ 00:16:08.187 Supported: No 00:16:08.187 00:16:08.187 Persistent Memory Region Support 00:16:08.187 ================================ 00:16:08.187 Supported: No 00:16:08.187 00:16:08.187 Admin Command Set Attributes 00:16:08.187 ============================ 00:16:08.187 Security Send/Receive: Not Supported 
00:16:08.187 Format NVM: Not Supported 00:16:08.187 Firmware Activate/Download: Not Supported 00:16:08.187 Namespace Management: Not Supported 00:16:08.187 Device Self-Test: Not Supported 00:16:08.187 Directives: Not Supported 00:16:08.187 NVMe-MI: Not Supported 00:16:08.187 Virtualization Management: Not Supported 00:16:08.187 Doorbell Buffer Config: Not Supported 00:16:08.187 Get LBA Status Capability: Not Supported 00:16:08.187 Command & Feature Lockdown Capability: Not Supported 00:16:08.187 Abort Command Limit: 4 00:16:08.187 Async Event Request Limit: 4 00:16:08.187 Number of Firmware Slots: N/A 00:16:08.187 Firmware Slot 1 Read-Only: N/A 00:16:08.187 Firmware Activation Without Reset: N/A 00:16:08.187 Multiple Update Detection Support: N/A 00:16:08.187 Firmware Update Granularity: No Information Provided 00:16:08.187 Per-Namespace SMART Log: No 00:16:08.187 Asymmetric Namespace Access Log Page: Not Supported 00:16:08.187 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:08.187 Command Effects Log Page: Supported 00:16:08.187 Get Log Page Extended Data: Supported 00:16:08.187 Telemetry Log Pages: Not Supported 00:16:08.187 Persistent Event Log Pages: Not Supported 00:16:08.187 Supported Log Pages Log Page: May Support 00:16:08.187 Commands Supported & Effects Log Page: Not Supported 00:16:08.187 Feature Identifiers & Effects Log Page:May Support 00:16:08.187 NVMe-MI Commands & Effects Log Page: May Support 00:16:08.187 Data Area 4 for Telemetry Log: Not Supported 00:16:08.187 Error Log Page Entries Supported: 128 00:16:08.187 Keep Alive: Supported 00:16:08.187 Keep Alive Granularity: 10000 ms 00:16:08.187 00:16:08.187 NVM Command Set Attributes 00:16:08.187 ========================== 00:16:08.187 Submission Queue Entry Size 00:16:08.187 Max: 64 00:16:08.187 Min: 64 00:16:08.187 Completion Queue Entry Size 00:16:08.187 Max: 16 00:16:08.187 Min: 16 00:16:08.187 Number of Namespaces: 32 00:16:08.187 Compare Command: Supported 00:16:08.187 Write Uncorrectable 
Command: Not Supported 00:16:08.187 Dataset Management Command: Supported 00:16:08.187 Write Zeroes Command: Supported 00:16:08.187 Set Features Save Field: Not Supported 00:16:08.187 Reservations: Not Supported 00:16:08.187 Timestamp: Not Supported 00:16:08.187 Copy: Supported 00:16:08.187 Volatile Write Cache: Present 00:16:08.187 Atomic Write Unit (Normal): 1 00:16:08.187 Atomic Write Unit (PFail): 1 00:16:08.187 Atomic Compare & Write Unit: 1 00:16:08.187 Fused Compare & Write: Supported 00:16:08.187 Scatter-Gather List 00:16:08.187 SGL Command Set: Supported (Dword aligned) 00:16:08.187 SGL Keyed: Not Supported 00:16:08.187 SGL Bit Bucket Descriptor: Not Supported 00:16:08.187 SGL Metadata Pointer: Not Supported 00:16:08.187 Oversized SGL: Not Supported 00:16:08.187 SGL Metadata Address: Not Supported 00:16:08.187 SGL Offset: Not Supported 00:16:08.187 Transport SGL Data Block: Not Supported 00:16:08.187 Replay Protected Memory Block: Not Supported 00:16:08.187 00:16:08.187 Firmware Slot Information 00:16:08.187 ========================= 00:16:08.187 Active slot: 1 00:16:08.187 Slot 1 Firmware Revision: 25.01 00:16:08.187 00:16:08.187 00:16:08.187 Commands Supported and Effects 00:16:08.187 ============================== 00:16:08.187 Admin Commands 00:16:08.187 -------------- 00:16:08.187 Get Log Page (02h): Supported 00:16:08.187 Identify (06h): Supported 00:16:08.187 Abort (08h): Supported 00:16:08.187 Set Features (09h): Supported 00:16:08.187 Get Features (0Ah): Supported 00:16:08.187 Asynchronous Event Request (0Ch): Supported 00:16:08.187 Keep Alive (18h): Supported 00:16:08.187 I/O Commands 00:16:08.187 ------------ 00:16:08.187 Flush (00h): Supported LBA-Change 00:16:08.187 Write (01h): Supported LBA-Change 00:16:08.187 Read (02h): Supported 00:16:08.187 Compare (05h): Supported 00:16:08.187 Write Zeroes (08h): Supported LBA-Change 00:16:08.187 Dataset Management (09h): Supported LBA-Change 00:16:08.187 Copy (19h): Supported LBA-Change 00:16:08.187 
00:16:08.187 Error Log 00:16:08.187 ========= 00:16:08.187 00:16:08.187 Arbitration 00:16:08.187 =========== 00:16:08.187 Arbitration Burst: 1 00:16:08.187 00:16:08.187 Power Management 00:16:08.187 ================ 00:16:08.187 Number of Power States: 1 00:16:08.187 Current Power State: Power State #0 00:16:08.187 Power State #0: 00:16:08.187 Max Power: 0.00 W 00:16:08.187 Non-Operational State: Operational 00:16:08.187 Entry Latency: Not Reported 00:16:08.187 Exit Latency: Not Reported 00:16:08.187 Relative Read Throughput: 0 00:16:08.187 Relative Read Latency: 0 00:16:08.187 Relative Write Throughput: 0 00:16:08.187 Relative Write Latency: 0 00:16:08.187 Idle Power: Not Reported 00:16:08.187 Active Power: Not Reported 00:16:08.187 Non-Operational Permissive Mode: Not Supported 00:16:08.187 00:16:08.187 Health Information 00:16:08.187 ================== 00:16:08.187 Critical Warnings: 00:16:08.187 Available Spare Space: OK 00:16:08.187 Temperature: OK 00:16:08.187 Device Reliability: OK 00:16:08.187 Read Only: No 00:16:08.187 Volatile Memory Backup: OK 00:16:08.187 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:08.187 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:08.187 Available Spare: 0% 00:16:08.187
[2024-11-19 11:10:16.305341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:08.187 [2024-11-19 11:10:16.305353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:08.187 [2024-11-19 11:10:16.305381] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:08.187 [2024-11-19 11:10:16.305391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.187 [2024-11-19 11:10:16.305397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.187 [2024-11-19 11:10:16.305404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.187 [2024-11-19 11:10:16.305410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.187 [2024-11-19 11:10:16.306429] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:08.187 [2024-11-19 11:10:16.306440] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:08.188 [2024-11-19 11:10:16.307437] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:08.188 [2024-11-19 11:10:16.307479] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:08.188 [2024-11-19 11:10:16.307485] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:08.188 [2024-11-19 11:10:16.308449] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:08.188 [2024-11-19 11:10:16.308460] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:08.188 [2024-11-19 11:10:16.308519] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:08.188 [2024-11-19 11:10:16.312869] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:08.188
Available Spare Threshold: 0% 00:16:08.188 Life Percentage Used: 0% 
00:16:08.188 Data Units Read: 0 00:16:08.188 Data Units Written: 0 00:16:08.188 Host Read Commands: 0 00:16:08.188 Host Write Commands: 0 00:16:08.188 Controller Busy Time: 0 minutes 00:16:08.188 Power Cycles: 0 00:16:08.188 Power On Hours: 0 hours 00:16:08.188 Unsafe Shutdowns: 0 00:16:08.188 Unrecoverable Media Errors: 0 00:16:08.188 Lifetime Error Log Entries: 0 00:16:08.188 Warning Temperature Time: 0 minutes 00:16:08.188 Critical Temperature Time: 0 minutes 00:16:08.188 00:16:08.188 Number of Queues 00:16:08.188 ================ 00:16:08.188 Number of I/O Submission Queues: 127 00:16:08.188 Number of I/O Completion Queues: 127 00:16:08.188 00:16:08.188 Active Namespaces 00:16:08.188 ================= 00:16:08.188 Namespace ID:1 00:16:08.188 Error Recovery Timeout: Unlimited 00:16:08.188 Command Set Identifier: NVM (00h) 00:16:08.188 Deallocate: Supported 00:16:08.188 Deallocated/Unwritten Error: Not Supported 00:16:08.188 Deallocated Read Value: Unknown 00:16:08.188 Deallocate in Write Zeroes: Not Supported 00:16:08.188 Deallocated Guard Field: 0xFFFF 00:16:08.188 Flush: Supported 00:16:08.188 Reservation: Supported 00:16:08.188 Namespace Sharing Capabilities: Multiple Controllers 00:16:08.188 Size (in LBAs): 131072 (0GiB) 00:16:08.188 Capacity (in LBAs): 131072 (0GiB) 00:16:08.188 Utilization (in LBAs): 131072 (0GiB) 00:16:08.188 NGUID: 036CA36D4B8643188D5773BD71B387FE 00:16:08.188 UUID: 036ca36d-4b86-4318-8d57-73bd71b387fe 00:16:08.188 Thin Provisioning: Not Supported 00:16:08.188 Per-NS Atomic Units: Yes 00:16:08.188 Atomic Boundary Size (Normal): 0 00:16:08.188 Atomic Boundary Size (PFail): 0 00:16:08.188 Atomic Boundary Offset: 0 00:16:08.188 Maximum Single Source Range Length: 65535 00:16:08.188 Maximum Copy Length: 65535 00:16:08.188 Maximum Source Range Count: 1 00:16:08.188 NGUID/EUI64 Never Reused: No 00:16:08.188 Namespace Write Protected: No 00:16:08.188 Number of LBA Formats: 1 00:16:08.188 Current LBA Format: LBA Format #00 00:16:08.188 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:16:08.188 00:16:08.188 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:08.188 [2024-11-19 11:10:16.511538] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:13.470 Initializing NVMe Controllers 00:16:13.470 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:13.470 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:13.470 Initialization complete. Launching workers. 00:16:13.470 ======================================================== 00:16:13.470 Latency(us) 00:16:13.470 Device Information : IOPS MiB/s Average min max 00:16:13.470 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40033.44 156.38 3197.00 846.99 9782.68 00:16:13.470 ======================================================== 00:16:13.470 Total : 40033.44 156.38 3197.00 846.99 9782.68 00:16:13.470 00:16:13.470 [2024-11-19 11:10:21.532302] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:13.470 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:13.470 [2024-11-19 11:10:21.722179] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:18.757 Initializing NVMe Controllers 00:16:18.757 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:18.757 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:18.757 Initialization complete. Launching workers. 00:16:18.757 ======================================================== 00:16:18.757 Latency(us) 00:16:18.757 Device Information : IOPS MiB/s Average min max 00:16:18.757 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.91 62.72 7977.65 4987.10 11970.61 00:16:18.757 ======================================================== 00:16:18.757 Total : 16055.91 62.72 7977.65 4987.10 11970.61 00:16:18.757 00:16:18.757 [2024-11-19 11:10:26.764954] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:18.757 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:18.757 [2024-11-19 11:10:26.980895] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:24.038 [2024-11-19 11:10:32.043035] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:24.038 Initializing NVMe Controllers 00:16:24.038 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:24.038 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:24.038 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:24.038 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:24.038 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:24.038 Initialization complete. 
Launching workers. 00:16:24.038 Starting thread on core 2 00:16:24.038 Starting thread on core 3 00:16:24.038 Starting thread on core 1 00:16:24.038 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:24.038 [2024-11-19 11:10:32.337104] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:27.340 [2024-11-19 11:10:35.405061] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:27.340 Initializing NVMe Controllers 00:16:27.340 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:27.340 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:27.340 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:27.340 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:27.340 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:27.340 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:27.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:27.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:27.340 Initialization complete. Launching workers. 
00:16:27.340 Starting thread on core 1 with urgent priority queue 00:16:27.340 Starting thread on core 2 with urgent priority queue 00:16:27.340 Starting thread on core 3 with urgent priority queue 00:16:27.340 Starting thread on core 0 with urgent priority queue 00:16:27.340 SPDK bdev Controller (SPDK1 ) core 0: 10799.00 IO/s 9.26 secs/100000 ios 00:16:27.340 SPDK bdev Controller (SPDK1 ) core 1: 12176.00 IO/s 8.21 secs/100000 ios 00:16:27.340 SPDK bdev Controller (SPDK1 ) core 2: 9458.33 IO/s 10.57 secs/100000 ios 00:16:27.340 SPDK bdev Controller (SPDK1 ) core 3: 12553.33 IO/s 7.97 secs/100000 ios 00:16:27.340 ======================================================== 00:16:27.340 00:16:27.341 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:27.601 [2024-11-19 11:10:35.703304] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:27.601 Initializing NVMe Controllers 00:16:27.601 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:27.601 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:27.601 Namespace ID: 1 size: 0GB 00:16:27.601 Initialization complete. 00:16:27.601 INFO: using host memory buffer for IO 00:16:27.601 Hello world! 
00:16:27.601 [2024-11-19 11:10:35.736497] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:27.601 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:27.862 [2024-11-19 11:10:36.035288] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:28.804 Initializing NVMe Controllers 00:16:28.804 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:28.804 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:28.804 Initialization complete. Launching workers. 00:16:28.804 submit (in ns) avg, min, max = 8542.6, 3927.5, 4075255.8 00:16:28.804 complete (in ns) avg, min, max = 17331.8, 2375.0, 3998439.2 00:16:28.804 00:16:28.804 Submit histogram 00:16:28.804 ================ 00:16:28.804 Range in us Cumulative Count 00:16:28.804 3.920 - 3.947: 0.5154% ( 97) 00:16:28.804 3.947 - 3.973: 4.2563% ( 704) 00:16:28.804 3.973 - 4.000: 12.7318% ( 1595) 00:16:28.804 4.000 - 4.027: 25.1448% ( 2336) 00:16:28.804 4.027 - 4.053: 38.0413% ( 2427) 00:16:28.804 4.053 - 4.080: 52.9784% ( 2811) 00:16:28.804 4.080 - 4.107: 70.5192% ( 3301) 00:16:28.804 4.107 - 4.133: 84.3031% ( 2594) 00:16:28.804 4.133 - 4.160: 93.0921% ( 1654) 00:16:28.804 4.160 - 4.187: 97.3803% ( 807) 00:16:28.804 4.187 - 4.213: 98.8947% ( 285) 00:16:28.804 4.213 - 4.240: 99.3198% ( 80) 00:16:28.804 4.240 - 4.267: 99.3889% ( 13) 00:16:28.804 4.267 - 4.293: 99.4208% ( 6) 00:16:28.804 4.293 - 4.320: 99.4261% ( 1) 00:16:28.804 4.427 - 4.453: 99.4314% ( 1) 00:16:28.804 4.507 - 4.533: 99.4421% ( 2) 00:16:28.804 4.560 - 4.587: 99.4474% ( 1) 00:16:28.804 4.587 - 4.613: 99.4527% ( 1) 00:16:28.804 4.613 - 4.640: 99.4633% ( 2) 00:16:28.804 4.720 - 4.747: 99.4686% ( 1) 
00:16:28.804 4.933 - 4.960: 99.4739% ( 1) 00:16:28.804 4.987 - 5.013: 99.4792% ( 1) 00:16:28.804 5.040 - 5.067: 99.4846% ( 1) 00:16:28.804 5.093 - 5.120: 99.4899% ( 1) 00:16:28.804 5.147 - 5.173: 99.4952% ( 1) 00:16:28.804 5.493 - 5.520: 99.5005% ( 1) 00:16:28.804 5.573 - 5.600: 99.5058% ( 1) 00:16:28.804 5.600 - 5.627: 99.5218% ( 3) 00:16:28.804 5.653 - 5.680: 99.5324% ( 2) 00:16:28.804 5.760 - 5.787: 99.5377% ( 1) 00:16:28.804 5.787 - 5.813: 99.5483% ( 2) 00:16:28.804 5.867 - 5.893: 99.5536% ( 1) 00:16:28.804 5.920 - 5.947: 99.5643% ( 2) 00:16:28.804 5.947 - 5.973: 99.5696% ( 1) 00:16:28.804 5.973 - 6.000: 99.5855% ( 3) 00:16:28.804 6.000 - 6.027: 99.5908% ( 1) 00:16:28.804 6.027 - 6.053: 99.5962% ( 1) 00:16:28.804 6.053 - 6.080: 99.6121% ( 3) 00:16:28.804 6.080 - 6.107: 99.6227% ( 2) 00:16:28.804 6.107 - 6.133: 99.6280% ( 1) 00:16:28.804 6.133 - 6.160: 99.6440% ( 3) 00:16:28.804 6.160 - 6.187: 99.6599% ( 3) 00:16:28.804 6.213 - 6.240: 99.6652% ( 1) 00:16:28.804 6.240 - 6.267: 99.6812% ( 3) 00:16:28.804 6.267 - 6.293: 99.6865% ( 1) 00:16:28.804 6.293 - 6.320: 99.6918% ( 1) 00:16:28.804 6.320 - 6.347: 99.7024% ( 2) 00:16:28.804 6.347 - 6.373: 99.7077% ( 1) 00:16:28.804 6.373 - 6.400: 99.7237% ( 3) 00:16:28.804 6.400 - 6.427: 99.7290% ( 1) 00:16:28.804 6.427 - 6.453: 99.7343% ( 1) 00:16:28.804 6.480 - 6.507: 99.7609% ( 5) 00:16:28.804 6.507 - 6.533: 99.7768% ( 3) 00:16:28.804 6.533 - 6.560: 99.7821% ( 1) 00:16:28.804 6.560 - 6.587: 99.7874% ( 1) 00:16:28.804 6.587 - 6.613: 99.7981% ( 2) 00:16:28.804 6.640 - 6.667: 99.8087% ( 2) 00:16:28.804 6.720 - 6.747: 99.8140% ( 1) 00:16:28.804 6.747 - 6.773: 99.8193% ( 1) 00:16:28.804 6.800 - 6.827: 99.8300% ( 2) 00:16:28.804 6.827 - 6.880: 99.8353% ( 1) 00:16:28.804 6.880 - 6.933: 99.8406% ( 1) 00:16:28.804 6.933 - 6.987: 99.8459% ( 1) 00:16:28.804 6.987 - 7.040: 99.8512% ( 1) 00:16:28.804 7.307 - 7.360: 99.8565% ( 1) 00:16:28.804 7.360 - 7.413: 99.8618% ( 1) 00:16:28.804 7.413 - 7.467: 99.8672% ( 1) 00:16:28.804 7.787 - 
7.840: 99.8725% ( 1) 00:16:28.804 8.053 - 8.107: 99.8778% ( 1) 00:16:28.804 9.547 - 9.600: 99.8831% ( 1) 00:16:28.804 10.400 - 10.453: 99.8884% ( 1) 00:16:28.804 3986.773 - 4014.080: 99.9947% ( 20) 00:16:28.804 4068.693 - 4096.000: 100.0000% ( 1) 00:16:28.804 00:16:28.804 Complete histogram 00:16:28.804 ================== 00:16:28.804 Range in us Cumulative Count 00:16:28.804 2.373 - 2.387: 0.0106% ( 2) 00:16:28.804 2.387 - 2.400: 0.4145% ( 76) 00:16:28.804 2.400 - 2.413: 1.1797% ( 144) 00:16:28.804 [2024-11-19 11:10:37.055827] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:28.804 2.413 - 2.427: 1.3178% ( 26) 00:16:28.804 2.427 - 2.440: 1.4719% ( 29) 00:16:28.804 2.440 - 2.453: 30.6658% ( 5494) 00:16:28.804 2.453 - 2.467: 60.6621% ( 5645) 00:16:28.804 2.467 - 2.480: 69.1588% ( 1599) 00:16:28.804 2.480 - 2.493: 76.7628% ( 1431) 00:16:28.804 2.493 - 2.507: 80.4878% ( 701) 00:16:28.804 2.507 - 2.520: 82.9056% ( 455) 00:16:28.804 2.520 - 2.533: 89.3671% ( 1216) 00:16:28.804 2.533 - 2.547: 95.1963% ( 1097) 00:16:28.804 2.547 - 2.560: 97.3909% ( 413) 00:16:28.804 2.560 - 2.573: 98.6397% ( 235) 00:16:28.804 2.573 - 2.587: 99.2561% ( 116) 00:16:28.804 2.587 - 2.600: 99.4049% ( 28) 00:16:28.804 2.600 - 2.613: 99.4367% ( 6) 00:16:28.804 2.613 - 2.627: 99.4474% ( 2) 00:16:28.804 2.800 - 2.813: 99.4527% ( 1) 00:16:28.804 3.027 - 3.040: 99.4580% ( 1) 00:16:28.804 4.213 - 4.240: 99.4686% ( 2) 00:16:28.804 4.320 - 4.347: 99.4739% ( 1) 00:16:28.804 4.400 - 4.427: 99.4792% ( 1) 00:16:28.804 4.427 - 4.453: 99.4899% ( 2) 00:16:28.804 4.480 - 4.507: 99.4952% ( 1) 00:16:28.804 4.640 - 4.667: 99.5164% ( 4) 00:16:28.804 4.667 - 4.693: 99.5218% ( 1) 00:16:28.804 4.693 - 4.720: 99.5271% ( 1) 00:16:28.804 4.720 - 4.747: 99.5377% ( 2) 00:16:28.804 4.747 - 4.773: 99.5430% ( 1) 00:16:28.804 4.827 - 4.853: 99.5483% ( 1) 00:16:28.804 5.013 - 5.040: 99.5536% ( 1) 00:16:28.804 5.067 - 5.093: 99.5590% ( 1) 00:16:28.804 5.120 - 5.147: 
99.5643% ( 1) 00:16:28.804 5.227 - 5.253: 99.5696% ( 1) 00:16:28.804 5.547 - 5.573: 99.5749% ( 1) 00:16:28.804 5.600 - 5.627: 99.5802% ( 1) 00:16:28.804 5.627 - 5.653: 99.5855% ( 1) 00:16:28.804 5.707 - 5.733: 99.5908% ( 1) 00:16:28.804 5.733 - 5.760: 99.5962% ( 1) 00:16:28.804 5.973 - 6.000: 99.6015% ( 1) 00:16:28.804 7.093 - 7.147: 99.6068% ( 1) 00:16:28.804 9.707 - 9.760: 99.6121% ( 1) 00:16:28.804 10.027 - 10.080: 99.6174% ( 1) 00:16:28.804 11.733 - 11.787: 99.6227% ( 1) 00:16:28.804 46.933 - 47.147: 99.6280% ( 1) 00:16:28.804 3850.240 - 3877.547: 99.6333% ( 1) 00:16:28.804 3986.773 - 4014.080: 100.0000% ( 69) 00:16:28.804 00:16:28.804 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:28.804 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:28.804 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:28.804 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:28.804 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:29.064 [ 00:16:29.064 { 00:16:29.064 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:29.064 "subtype": "Discovery", 00:16:29.064 "listen_addresses": [], 00:16:29.064 "allow_any_host": true, 00:16:29.064 "hosts": [] 00:16:29.064 }, 00:16:29.064 { 00:16:29.064 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:29.064 "subtype": "NVMe", 00:16:29.064 "listen_addresses": [ 00:16:29.064 { 00:16:29.064 "trtype": "VFIOUSER", 00:16:29.064 "adrfam": "IPv4", 00:16:29.064 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:29.064 "trsvcid": "0" 00:16:29.064 } 00:16:29.064 ], 
00:16:29.064 "allow_any_host": true, 00:16:29.064 "hosts": [], 00:16:29.064 "serial_number": "SPDK1", 00:16:29.064 "model_number": "SPDK bdev Controller", 00:16:29.064 "max_namespaces": 32, 00:16:29.064 "min_cntlid": 1, 00:16:29.064 "max_cntlid": 65519, 00:16:29.064 "namespaces": [ 00:16:29.064 { 00:16:29.064 "nsid": 1, 00:16:29.064 "bdev_name": "Malloc1", 00:16:29.064 "name": "Malloc1", 00:16:29.064 "nguid": "036CA36D4B8643188D5773BD71B387FE", 00:16:29.064 "uuid": "036ca36d-4b86-4318-8d57-73bd71b387fe" 00:16:29.064 } 00:16:29.064 ] 00:16:29.064 }, 00:16:29.064 { 00:16:29.064 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:29.064 "subtype": "NVMe", 00:16:29.064 "listen_addresses": [ 00:16:29.064 { 00:16:29.064 "trtype": "VFIOUSER", 00:16:29.064 "adrfam": "IPv4", 00:16:29.064 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:29.064 "trsvcid": "0" 00:16:29.064 } 00:16:29.064 ], 00:16:29.064 "allow_any_host": true, 00:16:29.064 "hosts": [], 00:16:29.064 "serial_number": "SPDK2", 00:16:29.064 "model_number": "SPDK bdev Controller", 00:16:29.064 "max_namespaces": 32, 00:16:29.064 "min_cntlid": 1, 00:16:29.064 "max_cntlid": 65519, 00:16:29.064 "namespaces": [ 00:16:29.064 { 00:16:29.064 "nsid": 1, 00:16:29.064 "bdev_name": "Malloc2", 00:16:29.064 "name": "Malloc2", 00:16:29.064 "nguid": "B88304017D274B30B5308DAF3A1FD01D", 00:16:29.064 "uuid": "b8830401-7d27-4b30-b530-8daf3a1fd01d" 00:16:29.064 } 00:16:29.064 ] 00:16:29.064 } 00:16:29.064 ] 00:16:29.064 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:29.064 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4088127 00:16:29.064 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:29.065 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:29.065 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:29.065 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:29.065 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:29.065 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:29.065 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:29.065 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:29.325 Malloc3 00:16:29.325 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:29.325 [2024-11-19 11:10:37.507855] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.325 [2024-11-19 11:10:37.657887] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:29.586 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:29.586 Asynchronous Event Request test 00:16:29.586 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:29.586 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:29.586 Registering asynchronous event callbacks... 
00:16:29.586 Starting namespace attribute notice tests for all controllers... 00:16:29.586 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:29.586 aer_cb - Changed Namespace 00:16:29.586 Cleaning up... 00:16:29.586 [ 00:16:29.586 { 00:16:29.586 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:29.586 "subtype": "Discovery", 00:16:29.586 "listen_addresses": [], 00:16:29.586 "allow_any_host": true, 00:16:29.586 "hosts": [] 00:16:29.586 }, 00:16:29.586 { 00:16:29.586 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:29.586 "subtype": "NVMe", 00:16:29.586 "listen_addresses": [ 00:16:29.586 { 00:16:29.586 "trtype": "VFIOUSER", 00:16:29.586 "adrfam": "IPv4", 00:16:29.586 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:29.586 "trsvcid": "0" 00:16:29.586 } 00:16:29.586 ], 00:16:29.586 "allow_any_host": true, 00:16:29.586 "hosts": [], 00:16:29.586 "serial_number": "SPDK1", 00:16:29.586 "model_number": "SPDK bdev Controller", 00:16:29.586 "max_namespaces": 32, 00:16:29.586 "min_cntlid": 1, 00:16:29.586 "max_cntlid": 65519, 00:16:29.586 "namespaces": [ 00:16:29.586 { 00:16:29.586 "nsid": 1, 00:16:29.586 "bdev_name": "Malloc1", 00:16:29.586 "name": "Malloc1", 00:16:29.586 "nguid": "036CA36D4B8643188D5773BD71B387FE", 00:16:29.586 "uuid": "036ca36d-4b86-4318-8d57-73bd71b387fe" 00:16:29.586 }, 00:16:29.586 { 00:16:29.586 "nsid": 2, 00:16:29.586 "bdev_name": "Malloc3", 00:16:29.586 "name": "Malloc3", 00:16:29.586 "nguid": "6BFD2C9B584C46FF92D133808709F4C0", 00:16:29.586 "uuid": "6bfd2c9b-584c-46ff-92d1-33808709f4c0" 00:16:29.586 } 00:16:29.586 ] 00:16:29.586 }, 00:16:29.586 { 00:16:29.586 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:29.586 "subtype": "NVMe", 00:16:29.586 "listen_addresses": [ 00:16:29.586 { 00:16:29.586 "trtype": "VFIOUSER", 00:16:29.586 "adrfam": "IPv4", 00:16:29.586 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:29.586 "trsvcid": "0" 00:16:29.586 } 00:16:29.586 ], 00:16:29.586 
"allow_any_host": true, 00:16:29.586 "hosts": [], 00:16:29.586 "serial_number": "SPDK2", 00:16:29.586 "model_number": "SPDK bdev Controller", 00:16:29.586 "max_namespaces": 32, 00:16:29.586 "min_cntlid": 1, 00:16:29.586 "max_cntlid": 65519, 00:16:29.586 "namespaces": [ 00:16:29.586 { 00:16:29.586 "nsid": 1, 00:16:29.586 "bdev_name": "Malloc2", 00:16:29.586 "name": "Malloc2", 00:16:29.586 "nguid": "B88304017D274B30B5308DAF3A1FD01D", 00:16:29.586 "uuid": "b8830401-7d27-4b30-b530-8daf3a1fd01d" 00:16:29.586 } 00:16:29.586 ] 00:16:29.586 } 00:16:29.586 ] 00:16:29.586 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4088127 00:16:29.586 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:29.586 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:29.586 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:29.586 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:29.586 [2024-11-19 11:10:37.896571] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:16:29.586 [2024-11-19 11:10:37.896617] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4088355 ] 00:16:29.849 [2024-11-19 11:10:37.951934] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:29.849 [2024-11-19 11:10:37.958074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:29.849 [2024-11-19 11:10:37.958097] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc0be50c000 00:16:29.849 [2024-11-19 11:10:37.959076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.849 [2024-11-19 11:10:37.960079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.849 [2024-11-19 11:10:37.961086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.849 [2024-11-19 11:10:37.962092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:29.849 [2024-11-19 11:10:37.963095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:29.849 [2024-11-19 11:10:37.964099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.849 [2024-11-19 11:10:37.965101] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:29.849 
[2024-11-19 11:10:37.966105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:29.849 [2024-11-19 11:10:37.967110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:29.849 [2024-11-19 11:10:37.967120] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc0be501000 00:16:29.849 [2024-11-19 11:10:37.968445] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:29.849 [2024-11-19 11:10:37.988019] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:29.849 [2024-11-19 11:10:37.988044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:29.849 [2024-11-19 11:10:37.990093] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:29.849 [2024-11-19 11:10:37.990141] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:29.849 [2024-11-19 11:10:37.990227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:29.849 [2024-11-19 11:10:37.990240] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:29.849 [2024-11-19 11:10:37.990245] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:29.849 [2024-11-19 11:10:37.991098] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:29.849 [2024-11-19 11:10:37.991108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:29.849 [2024-11-19 11:10:37.991115] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:29.849 [2024-11-19 11:10:37.992101] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:29.849 [2024-11-19 11:10:37.992110] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:29.849 [2024-11-19 11:10:37.992118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:29.850 [2024-11-19 11:10:37.993108] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:29.850 [2024-11-19 11:10:37.993118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:29.850 [2024-11-19 11:10:37.994108] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:29.850 [2024-11-19 11:10:37.994117] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:29.850 [2024-11-19 11:10:37.994122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:29.850 [2024-11-19 11:10:37.994129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:29.850 [2024-11-19 11:10:37.994237] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:29.850 [2024-11-19 11:10:37.994242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:29.850 [2024-11-19 11:10:37.994247] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:29.850 [2024-11-19 11:10:37.995118] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:29.850 [2024-11-19 11:10:37.996121] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:29.850 [2024-11-19 11:10:37.997130] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:29.850 [2024-11-19 11:10:37.998131] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:29.850 [2024-11-19 11:10:37.998170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:29.850 [2024-11-19 11:10:37.999145] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:29.850 [2024-11-19 11:10:37.999154] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:29.850 [2024-11-19 11:10:37.999159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:37.999180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:29.850 [2024-11-19 11:10:37.999190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:37.999202] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:29.850 [2024-11-19 11:10:37.999207] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:29.850 [2024-11-19 11:10:37.999211] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.850 [2024-11-19 11:10:37.999221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:29.850 [2024-11-19 11:10:38.009876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:29.850 [2024-11-19 11:10:38.009888] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:29.850 [2024-11-19 11:10:38.009893] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:29.850 [2024-11-19 11:10:38.009898] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:29.850 [2024-11-19 11:10:38.009903] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:29.850 [2024-11-19 11:10:38.009910] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:29.850 [2024-11-19 11:10:38.009915] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:29.850 [2024-11-19 11:10:38.009920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.009929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.009939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:29.850 [2024-11-19 11:10:38.017867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:29.850 [2024-11-19 11:10:38.017879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.850 [2024-11-19 11:10:38.017888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.850 [2024-11-19 11:10:38.017897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.850 [2024-11-19 11:10:38.017905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.850 [2024-11-19 11:10:38.017910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.017917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.017926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:29.850 [2024-11-19 11:10:38.025869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:29.850 [2024-11-19 11:10:38.025879] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:29.850 [2024-11-19 11:10:38.025884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.025893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.025899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.025908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:29.850 [2024-11-19 11:10:38.033868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:29.850 [2024-11-19 11:10:38.033932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.033941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:29.850 
[2024-11-19 11:10:38.033948] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:29.850 [2024-11-19 11:10:38.033953] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:29.850 [2024-11-19 11:10:38.033956] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.850 [2024-11-19 11:10:38.033963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:29.850 [2024-11-19 11:10:38.041867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:29.850 [2024-11-19 11:10:38.041877] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:29.850 [2024-11-19 11:10:38.041891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.041899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.041907] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:29.850 [2024-11-19 11:10:38.041911] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:29.850 [2024-11-19 11:10:38.041915] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.850 [2024-11-19 11:10:38.041921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:29.850 [2024-11-19 11:10:38.049867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:29.850 [2024-11-19 11:10:38.049881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.049889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:29.850 [2024-11-19 11:10:38.049897] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:29.851 [2024-11-19 11:10:38.049901] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:29.851 [2024-11-19 11:10:38.049905] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.851 [2024-11-19 11:10:38.049911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:29.851 [2024-11-19 11:10:38.057868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:29.851 [2024-11-19 11:10:38.057880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:29.851 [2024-11-19 11:10:38.057887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:29.851 [2024-11-19 11:10:38.057895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:29.851 [2024-11-19 11:10:38.057901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:29.851 [2024-11-19 11:10:38.057906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:29.851 [2024-11-19 11:10:38.057912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:29.851 [2024-11-19 11:10:38.057917] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:29.851 [2024-11-19 11:10:38.057921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:29.851 [2024-11-19 11:10:38.057927] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:29.851 [2024-11-19 11:10:38.057943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:29.851 [2024-11-19 11:10:38.065870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:29.851 [2024-11-19 11:10:38.065884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:29.851 [2024-11-19 11:10:38.073868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:29.851 [2024-11-19 11:10:38.073881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:29.851 [2024-11-19 11:10:38.081869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:29.851 [2024-11-19 
11:10:38.081882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:29.851 [2024-11-19 11:10:38.089869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:29.851 [2024-11-19 11:10:38.089884] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:29.851 [2024-11-19 11:10:38.089889] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:29.851 [2024-11-19 11:10:38.089893] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:29.851 [2024-11-19 11:10:38.089897] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:29.851 [2024-11-19 11:10:38.089900] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:29.851 [2024-11-19 11:10:38.089907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:29.851 [2024-11-19 11:10:38.089915] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:29.851 [2024-11-19 11:10:38.089919] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:29.851 [2024-11-19 11:10:38.089922] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.851 [2024-11-19 11:10:38.089930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:29.851 [2024-11-19 11:10:38.089938] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:29.851 [2024-11-19 11:10:38.089942] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:29.851 [2024-11-19 11:10:38.089946] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.851 [2024-11-19 11:10:38.089952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:29.851 [2024-11-19 11:10:38.089959] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:29.851 [2024-11-19 11:10:38.089964] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:29.851 [2024-11-19 11:10:38.089967] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:29.851 [2024-11-19 11:10:38.089973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:29.851 [2024-11-19 11:10:38.097868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:29.851 [2024-11-19 11:10:38.097886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:29.851 [2024-11-19 11:10:38.097896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:29.851 [2024-11-19 11:10:38.097904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:29.851 ===================================================== 00:16:29.851 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:29.851 ===================================================== 00:16:29.851 Controller Capabilities/Features 00:16:29.851 
================================ 00:16:29.851 Vendor ID: 4e58 00:16:29.851 Subsystem Vendor ID: 4e58 00:16:29.851 Serial Number: SPDK2 00:16:29.851 Model Number: SPDK bdev Controller 00:16:29.851 Firmware Version: 25.01 00:16:29.851 Recommended Arb Burst: 6 00:16:29.851 IEEE OUI Identifier: 8d 6b 50 00:16:29.851 Multi-path I/O 00:16:29.851 May have multiple subsystem ports: Yes 00:16:29.851 May have multiple controllers: Yes 00:16:29.851 Associated with SR-IOV VF: No 00:16:29.851 Max Data Transfer Size: 131072 00:16:29.851 Max Number of Namespaces: 32 00:16:29.851 Max Number of I/O Queues: 127 00:16:29.851 NVMe Specification Version (VS): 1.3 00:16:29.851 NVMe Specification Version (Identify): 1.3 00:16:29.851 Maximum Queue Entries: 256 00:16:29.851 Contiguous Queues Required: Yes 00:16:29.851 Arbitration Mechanisms Supported 00:16:29.851 Weighted Round Robin: Not Supported 00:16:29.851 Vendor Specific: Not Supported 00:16:29.851 Reset Timeout: 15000 ms 00:16:29.851 Doorbell Stride: 4 bytes 00:16:29.851 NVM Subsystem Reset: Not Supported 00:16:29.851 Command Sets Supported 00:16:29.851 NVM Command Set: Supported 00:16:29.851 Boot Partition: Not Supported 00:16:29.851 Memory Page Size Minimum: 4096 bytes 00:16:29.851 Memory Page Size Maximum: 4096 bytes 00:16:29.851 Persistent Memory Region: Not Supported 00:16:29.851 Optional Asynchronous Events Supported 00:16:29.851 Namespace Attribute Notices: Supported 00:16:29.851 Firmware Activation Notices: Not Supported 00:16:29.851 ANA Change Notices: Not Supported 00:16:29.851 PLE Aggregate Log Change Notices: Not Supported 00:16:29.851 LBA Status Info Alert Notices: Not Supported 00:16:29.851 EGE Aggregate Log Change Notices: Not Supported 00:16:29.851 Normal NVM Subsystem Shutdown event: Not Supported 00:16:29.851 Zone Descriptor Change Notices: Not Supported 00:16:29.851 Discovery Log Change Notices: Not Supported 00:16:29.851 Controller Attributes 00:16:29.851 128-bit Host Identifier: Supported 00:16:29.851 
Non-Operational Permissive Mode: Not Supported 00:16:29.851 NVM Sets: Not Supported 00:16:29.851 Read Recovery Levels: Not Supported 00:16:29.851 Endurance Groups: Not Supported 00:16:29.851 Predictable Latency Mode: Not Supported 00:16:29.851 Traffic Based Keep ALive: Not Supported 00:16:29.851 Namespace Granularity: Not Supported 00:16:29.851 SQ Associations: Not Supported 00:16:29.851 UUID List: Not Supported 00:16:29.851 Multi-Domain Subsystem: Not Supported 00:16:29.852 Fixed Capacity Management: Not Supported 00:16:29.852 Variable Capacity Management: Not Supported 00:16:29.852 Delete Endurance Group: Not Supported 00:16:29.852 Delete NVM Set: Not Supported 00:16:29.852 Extended LBA Formats Supported: Not Supported 00:16:29.852 Flexible Data Placement Supported: Not Supported 00:16:29.852 00:16:29.852 Controller Memory Buffer Support 00:16:29.852 ================================ 00:16:29.852 Supported: No 00:16:29.852 00:16:29.852 Persistent Memory Region Support 00:16:29.852 ================================ 00:16:29.852 Supported: No 00:16:29.852 00:16:29.852 Admin Command Set Attributes 00:16:29.852 ============================ 00:16:29.852 Security Send/Receive: Not Supported 00:16:29.852 Format NVM: Not Supported 00:16:29.852 Firmware Activate/Download: Not Supported 00:16:29.852 Namespace Management: Not Supported 00:16:29.852 Device Self-Test: Not Supported 00:16:29.852 Directives: Not Supported 00:16:29.852 NVMe-MI: Not Supported 00:16:29.852 Virtualization Management: Not Supported 00:16:29.852 Doorbell Buffer Config: Not Supported 00:16:29.852 Get LBA Status Capability: Not Supported 00:16:29.852 Command & Feature Lockdown Capability: Not Supported 00:16:29.852 Abort Command Limit: 4 00:16:29.852 Async Event Request Limit: 4 00:16:29.852 Number of Firmware Slots: N/A 00:16:29.852 Firmware Slot 1 Read-Only: N/A 00:16:29.852 Firmware Activation Without Reset: N/A 00:16:29.852 Multiple Update Detection Support: N/A 00:16:29.852 Firmware Update 
Granularity: No Information Provided 00:16:29.852 Per-Namespace SMART Log: No 00:16:29.852 Asymmetric Namespace Access Log Page: Not Supported 00:16:29.852 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:29.852 Command Effects Log Page: Supported 00:16:29.852 Get Log Page Extended Data: Supported 00:16:29.852 Telemetry Log Pages: Not Supported 00:16:29.852 Persistent Event Log Pages: Not Supported 00:16:29.852 Supported Log Pages Log Page: May Support 00:16:29.852 Commands Supported & Effects Log Page: Not Supported 00:16:29.852 Feature Identifiers & Effects Log Page:May Support 00:16:29.852 NVMe-MI Commands & Effects Log Page: May Support 00:16:29.852 Data Area 4 for Telemetry Log: Not Supported 00:16:29.852 Error Log Page Entries Supported: 128 00:16:29.852 Keep Alive: Supported 00:16:29.852 Keep Alive Granularity: 10000 ms 00:16:29.852 00:16:29.852 NVM Command Set Attributes 00:16:29.852 ========================== 00:16:29.852 Submission Queue Entry Size 00:16:29.852 Max: 64 00:16:29.852 Min: 64 00:16:29.852 Completion Queue Entry Size 00:16:29.852 Max: 16 00:16:29.852 Min: 16 00:16:29.852 Number of Namespaces: 32 00:16:29.852 Compare Command: Supported 00:16:29.852 Write Uncorrectable Command: Not Supported 00:16:29.852 Dataset Management Command: Supported 00:16:29.852 Write Zeroes Command: Supported 00:16:29.852 Set Features Save Field: Not Supported 00:16:29.852 Reservations: Not Supported 00:16:29.852 Timestamp: Not Supported 00:16:29.852 Copy: Supported 00:16:29.852 Volatile Write Cache: Present 00:16:29.852 Atomic Write Unit (Normal): 1 00:16:29.852 Atomic Write Unit (PFail): 1 00:16:29.852 Atomic Compare & Write Unit: 1 00:16:29.852 Fused Compare & Write: Supported 00:16:29.852 Scatter-Gather List 00:16:29.852 SGL Command Set: Supported (Dword aligned) 00:16:29.852 SGL Keyed: Not Supported 00:16:29.852 SGL Bit Bucket Descriptor: Not Supported 00:16:29.852 SGL Metadata Pointer: Not Supported 00:16:29.852 Oversized SGL: Not Supported 00:16:29.852 SGL 
Metadata Address: Not Supported 00:16:29.852 SGL Offset: Not Supported 00:16:29.852 Transport SGL Data Block: Not Supported 00:16:29.852 Replay Protected Memory Block: Not Supported 00:16:29.852 00:16:29.852 Firmware Slot Information 00:16:29.852 ========================= 00:16:29.852 Active slot: 1 00:16:29.852 Slot 1 Firmware Revision: 25.01 00:16:29.852 00:16:29.852 00:16:29.852 Commands Supported and Effects 00:16:29.852 ============================== 00:16:29.852 Admin Commands 00:16:29.852 -------------- 00:16:29.852 Get Log Page (02h): Supported 00:16:29.852 Identify (06h): Supported 00:16:29.852 Abort (08h): Supported 00:16:29.852 Set Features (09h): Supported 00:16:29.852 Get Features (0Ah): Supported 00:16:29.852 Asynchronous Event Request (0Ch): Supported 00:16:29.852 Keep Alive (18h): Supported 00:16:29.852 I/O Commands 00:16:29.852 ------------ 00:16:29.852 Flush (00h): Supported LBA-Change 00:16:29.852 Write (01h): Supported LBA-Change 00:16:29.852 Read (02h): Supported 00:16:29.852 Compare (05h): Supported 00:16:29.852 Write Zeroes (08h): Supported LBA-Change 00:16:29.852 Dataset Management (09h): Supported LBA-Change 00:16:29.852 Copy (19h): Supported LBA-Change 00:16:29.852 00:16:29.852 Error Log 00:16:29.852 ========= 00:16:29.852 00:16:29.852 Arbitration 00:16:29.852 =========== 00:16:29.852 Arbitration Burst: 1 00:16:29.852 00:16:29.852 Power Management 00:16:29.852 ================ 00:16:29.852 Number of Power States: 1 00:16:29.852 Current Power State: Power State #0 00:16:29.852 Power State #0: 00:16:29.852 Max Power: 0.00 W 00:16:29.852 Non-Operational State: Operational 00:16:29.852 Entry Latency: Not Reported 00:16:29.852 Exit Latency: Not Reported 00:16:29.852 Relative Read Throughput: 0 00:16:29.852 Relative Read Latency: 0 00:16:29.852 Relative Write Throughput: 0 00:16:29.852 Relative Write Latency: 0 00:16:29.852 Idle Power: Not Reported 00:16:29.852 Active Power: Not Reported 00:16:29.852 Non-Operational Permissive Mode: Not 
Supported 00:16:29.852 00:16:29.852 Health Information 00:16:29.852 ================== 00:16:29.852 Critical Warnings: 00:16:29.852 Available Spare Space: OK 00:16:29.852 Temperature: OK 00:16:29.852 Device Reliability: OK 00:16:29.852 Read Only: No 00:16:29.852 Volatile Memory Backup: OK 00:16:29.852 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:29.852 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:29.852 Available Spare: 0% 00:16:29.852 Available Spare Threshold: 0% 00:16:29.853 [2024-11-19 11:10:38.098007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:29.852 [2024-11-19 11:10:38.105866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:29.852 [2024-11-19 11:10:38.105897] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:29.852 [2024-11-19 11:10:38.105907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.852 [2024-11-19 11:10:38.105913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.852 [2024-11-19 11:10:38.105920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.852 [2024-11-19 11:10:38.105927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.852 [2024-11-19 11:10:38.105967] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:29.852 [2024-11-19 11:10:38.105978] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:29.852 
[2024-11-19 11:10:38.106970] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.853 [2024-11-19 11:10:38.107020] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:29.853 [2024-11-19 11:10:38.107027] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:29.853 [2024-11-19 11:10:38.107972] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:29.853 [2024-11-19 11:10:38.107984] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:29.853 [2024-11-19 11:10:38.108037] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:29.853 [2024-11-19 11:10:38.109411] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:29.853 Life Percentage Used: 0% 00:16:29.853 Data Units Read: 0 00:16:29.853 Data Units Written: 0 00:16:29.853 Host Read Commands: 0 00:16:29.853 Host Write Commands: 0 00:16:29.853 Controller Busy Time: 0 minutes 00:16:29.853 Power Cycles: 0 00:16:29.853 Power On Hours: 0 hours 00:16:29.853 Unsafe Shutdowns: 0 00:16:29.853 Unrecoverable Media Errors: 0 00:16:29.853 Lifetime Error Log Entries: 0 00:16:29.853 Warning Temperature Time: 0 minutes 00:16:29.853 Critical Temperature Time: 0 minutes 00:16:29.853 00:16:29.853 Number of Queues 00:16:29.853 ================ 00:16:29.853 Number of I/O Submission Queues: 127 00:16:29.853 Number of I/O Completion Queues: 127 00:16:29.853 00:16:29.853 Active Namespaces 00:16:29.853 ================= 00:16:29.853 Namespace ID:1 00:16:29.853 Error Recovery Timeout: Unlimited 
00:16:29.853 Command Set Identifier: NVM (00h) 00:16:29.853 Deallocate: Supported 00:16:29.853 Deallocated/Unwritten Error: Not Supported 00:16:29.853 Deallocated Read Value: Unknown 00:16:29.853 Deallocate in Write Zeroes: Not Supported 00:16:29.853 Deallocated Guard Field: 0xFFFF 00:16:29.853 Flush: Supported 00:16:29.853 Reservation: Supported 00:16:29.853 Namespace Sharing Capabilities: Multiple Controllers 00:16:29.853 Size (in LBAs): 131072 (0GiB) 00:16:29.853 Capacity (in LBAs): 131072 (0GiB) 00:16:29.853 Utilization (in LBAs): 131072 (0GiB) 00:16:29.853 NGUID: B88304017D274B30B5308DAF3A1FD01D 00:16:29.853 UUID: b8830401-7d27-4b30-b530-8daf3a1fd01d 00:16:29.853 Thin Provisioning: Not Supported 00:16:29.853 Per-NS Atomic Units: Yes 00:16:29.853 Atomic Boundary Size (Normal): 0 00:16:29.853 Atomic Boundary Size (PFail): 0 00:16:29.853 Atomic Boundary Offset: 0 00:16:29.853 Maximum Single Source Range Length: 65535 00:16:29.853 Maximum Copy Length: 65535 00:16:29.853 Maximum Source Range Count: 1 00:16:29.853 NGUID/EUI64 Never Reused: No 00:16:29.853 Namespace Write Protected: No 00:16:29.853 Number of LBA Formats: 1 00:16:29.853 Current LBA Format: LBA Format #00 00:16:29.853 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:29.853 00:16:29.853 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:30.113 [2024-11-19 11:10:38.312241] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:35.394 Initializing NVMe Controllers 00:16:35.394 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:35.394 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:35.394 Initialization complete. Launching workers. 00:16:35.394 ======================================================== 00:16:35.394 Latency(us) 00:16:35.394 Device Information : IOPS MiB/s Average min max 00:16:35.394 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39979.05 156.17 3201.54 846.43 8773.17 00:16:35.394 ======================================================== 00:16:35.394 Total : 39979.05 156.17 3201.54 846.43 8773.17 00:16:35.394 00:16:35.394 [2024-11-19 11:10:43.421062] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:35.394 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:35.394 [2024-11-19 11:10:43.613631] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:40.678 Initializing NVMe Controllers 00:16:40.678 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:40.678 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:40.678 Initialization complete. Launching workers. 
00:16:40.678 ======================================================== 00:16:40.678 Latency(us) 00:16:40.678 Device Information : IOPS MiB/s Average min max 00:16:40.678 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35423.81 138.37 3612.98 1097.96 8647.92 00:16:40.678 ======================================================== 00:16:40.678 Total : 35423.81 138.37 3612.98 1097.96 8647.92 00:16:40.678 00:16:40.678 [2024-11-19 11:10:48.630745] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:40.678 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:40.678 [2024-11-19 11:10:48.838981] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.132 [2024-11-19 11:10:53.981951] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.132 Initializing NVMe Controllers 00:16:46.132 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.132 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:46.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:46.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:46.132 Initialization complete. Launching workers. 
00:16:46.132 Starting thread on core 2 00:16:46.132 Starting thread on core 3 00:16:46.132 Starting thread on core 1 00:16:46.132 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:46.132 [2024-11-19 11:10:54.279284] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.436 [2024-11-19 11:10:57.363077] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.436 Initializing NVMe Controllers 00:16:49.436 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.436 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.436 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:49.436 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:49.436 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:49.436 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:49.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:49.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:49.436 Initialization complete. Launching workers. 
00:16:49.436 Starting thread on core 1 with urgent priority queue 00:16:49.436 Starting thread on core 2 with urgent priority queue 00:16:49.436 Starting thread on core 3 with urgent priority queue 00:16:49.436 Starting thread on core 0 with urgent priority queue 00:16:49.436 SPDK bdev Controller (SPDK2 ) core 0: 14936.00 IO/s 6.70 secs/100000 ios 00:16:49.436 SPDK bdev Controller (SPDK2 ) core 1: 8319.67 IO/s 12.02 secs/100000 ios 00:16:49.436 SPDK bdev Controller (SPDK2 ) core 2: 7678.00 IO/s 13.02 secs/100000 ios 00:16:49.436 SPDK bdev Controller (SPDK2 ) core 3: 12494.33 IO/s 8.00 secs/100000 ios 00:16:49.436 ======================================================== 00:16:49.436 00:16:49.436 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:49.436 [2024-11-19 11:10:57.663288] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.436 Initializing NVMe Controllers 00:16:49.436 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.436 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.436 Namespace ID: 1 size: 0GB 00:16:49.436 Initialization complete. 00:16:49.436 INFO: using host memory buffer for IO 00:16:49.436 Hello world! 
00:16:49.436 [2024-11-19 11:10:57.675361] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.436 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:49.697 [2024-11-19 11:10:57.970812] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:51.086 Initializing NVMe Controllers 00:16:51.086 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.086 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.086 Initialization complete. Launching workers. 00:16:51.086 submit (in ns) avg, min, max = 8101.0, 3945.0, 3999875.8 00:16:51.086 complete (in ns) avg, min, max = 18855.6, 2384.2, 3998982.5 00:16:51.086 00:16:51.086 Submit histogram 00:16:51.086 ================ 00:16:51.086 Range in us Cumulative Count 00:16:51.086 3.920 - 3.947: 0.0053% ( 1) 00:16:51.086 3.947 - 3.973: 1.4816% ( 281) 00:16:51.086 3.973 - 4.000: 5.7266% ( 808) 00:16:51.086 4.000 - 4.027: 13.7333% ( 1524) 00:16:51.086 4.027 - 4.053: 24.2251% ( 1997) 00:16:51.086 4.053 - 4.080: 36.1091% ( 2262) 00:16:51.086 4.080 - 4.107: 49.0964% ( 2472) 00:16:51.086 4.107 - 4.133: 65.7875% ( 3177) 00:16:51.086 4.133 - 4.160: 81.6434% ( 3018) 00:16:51.086 4.160 - 4.187: 91.6360% ( 1902) 00:16:51.086 4.187 - 4.213: 96.4695% ( 920) 00:16:51.086 4.213 - 4.240: 98.5605% ( 398) 00:16:51.086 4.240 - 4.267: 99.2014% ( 122) 00:16:51.086 4.267 - 4.293: 99.4326% ( 44) 00:16:51.086 4.293 - 4.320: 99.4641% ( 6) 00:16:51.086 4.347 - 4.373: 99.4746% ( 2) 00:16:51.086 4.373 - 4.400: 99.4799% ( 1) 00:16:51.086 4.613 - 4.640: 99.4851% ( 1) 00:16:51.086 4.667 - 4.693: 99.4904% ( 1) 00:16:51.086 4.827 - 4.853: 99.4956% ( 1) 00:16:51.086 4.880 - 4.907: 99.5009% ( 1) 
00:16:51.086 4.987 - 5.013: 99.5167% ( 3) 00:16:51.086 5.147 - 5.173: 99.5272% ( 2) 00:16:51.086 5.333 - 5.360: 99.5377% ( 2) 00:16:51.086 5.787 - 5.813: 99.5429% ( 1) 00:16:51.086 5.840 - 5.867: 99.5482% ( 1) 00:16:51.086 5.893 - 5.920: 99.5534% ( 1) 00:16:51.086 5.973 - 6.000: 99.5639% ( 2) 00:16:51.086 6.000 - 6.027: 99.5744% ( 2) 00:16:51.086 6.053 - 6.080: 99.5797% ( 1) 00:16:51.086 6.107 - 6.133: 99.5850% ( 1) 00:16:51.086 6.133 - 6.160: 99.5902% ( 1) 00:16:51.086 6.160 - 6.187: 99.6007% ( 2) 00:16:51.086 6.187 - 6.213: 99.6112% ( 2) 00:16:51.086 6.213 - 6.240: 99.6165% ( 1) 00:16:51.086 6.320 - 6.347: 99.6217% ( 1) 00:16:51.086 6.347 - 6.373: 99.6322% ( 2) 00:16:51.086 6.373 - 6.400: 99.6375% ( 1) 00:16:51.086 6.400 - 6.427: 99.6427% ( 1) 00:16:51.086 6.427 - 6.453: 99.6480% ( 1) 00:16:51.086 6.453 - 6.480: 99.6533% ( 1) 00:16:51.086 6.533 - 6.560: 99.6690% ( 3) 00:16:51.086 6.640 - 6.667: 99.6795% ( 2) 00:16:51.086 6.667 - 6.693: 99.6848% ( 1) 00:16:51.086 6.693 - 6.720: 99.6900% ( 1) 00:16:51.086 6.720 - 6.747: 99.7005% ( 2) 00:16:51.086 6.800 - 6.827: 99.7216% ( 4) 00:16:51.086 6.880 - 6.933: 99.7321% ( 2) 00:16:51.086 6.933 - 6.987: 99.7531% ( 4) 00:16:51.086 6.987 - 7.040: 99.7793% ( 5) 00:16:51.086 7.147 - 7.200: 99.7898% ( 2) 00:16:51.086 7.200 - 7.253: 99.7951% ( 1) 00:16:51.086 7.253 - 7.307: 99.8056% ( 2) 00:16:51.086 7.307 - 7.360: 99.8161% ( 2) 00:16:51.086 7.413 - 7.467: 99.8214% ( 1) 00:16:51.086 7.573 - 7.627: 99.8319% ( 2) 00:16:51.086 7.733 - 7.787: 99.8476% ( 3) 00:16:51.086 7.947 - 8.000: 99.8529% ( 1) 00:16:51.086 8.053 - 8.107: 99.8581% ( 1) 00:16:51.086 8.107 - 8.160: 99.8634% ( 1) 00:16:51.086 8.747 - 8.800: 99.8739% ( 2) 00:16:51.086 9.493 - 9.547: 99.8792% ( 1) 00:16:51.086 10.080 - 10.133: 99.8844% ( 1) 00:16:51.086 10.293 - 10.347: 99.8897% ( 1) 00:16:51.086 11.680 - 11.733: 99.8949% ( 1) 00:16:51.086 14.080 - 14.187: 99.9002% ( 1) 00:16:51.086 3986.773 - 4014.080: 100.0000% ( 19) 00:16:51.086 00:16:51.086 Complete histogram 
00:16:51.086 ================== 00:16:51.086 Range in us Cumulative Count 00:16:51.086 2.373 - 2.387: 0.0053% ( 1) 00:16:51.086 2.387 - 2.400: 0.7670% ( 145) 00:16:51.086 2.400 - 2.413: 1.0980% ( 63) 00:16:51.086 2.413 - 2.427: 1.2031% ( 20) 00:16:51.086 2.427 - 2.440: 1.3502% ( 28) 00:16:51.086 2.440 - 2.453: 26.8204% ( 4848) 00:16:51.086 2.453 - 2.467: 54.1767% ( 5207) 00:16:51.086 2.467 - 2.480: 64.3375% ( 1934) 00:16:51.086 2.480 - 2.493: 74.6349% ( 1960) 00:16:51.086 2.493 - 2.507: 79.6575% ( 956) 00:16:51.086 2.507 - 2.520: 82.1688% ( 478) [2024-11-19 11:10:59.062539] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:51.086 2.520 - 2.533: 86.6870% ( 860) 00:16:51.086 2.533 - 2.547: 92.8286% ( 1169) 00:16:51.086 2.547 - 2.560: 95.7287% ( 552) 00:16:51.086 2.560 - 2.573: 97.6778% ( 371) 00:16:51.086 2.573 - 2.587: 98.7864% ( 211) 00:16:51.086 2.587 - 2.600: 99.1594% ( 71) 00:16:51.086 2.600 - 2.613: 99.2855% ( 24) 00:16:51.086 2.613 - 2.627: 99.3118% ( 5) 00:16:51.086 2.627 - 2.640: 99.3223% ( 2) 00:16:51.086 2.653 - 2.667: 99.3328% ( 2) 00:16:51.086 3.027 - 3.040: 99.3380% ( 1) 00:16:51.086 3.067 - 3.080: 99.3433% ( 1) 00:16:51.086 4.507 - 4.533: 99.3485% ( 1) 00:16:51.086 4.587 - 4.613: 99.3590% ( 2) 00:16:51.086 4.720 - 4.747: 99.3643% ( 1) 00:16:51.086 4.747 - 4.773: 99.3748% ( 2) 00:16:51.086 4.773 - 4.800: 99.3801% ( 1) 00:16:51.086 4.827 - 4.853: 99.3853% ( 1) 00:16:51.086 4.853 - 4.880: 99.3906% ( 1) 00:16:51.086 4.880 - 4.907: 99.4011% ( 2) 00:16:51.086 4.933 - 4.960: 99.4063% ( 1) 00:16:51.086 4.960 - 4.987: 99.4116% ( 1) 00:16:51.086 5.040 - 5.067: 99.4326% ( 4) 00:16:51.086 5.093 - 5.120: 99.4378% ( 1) 00:16:51.086 5.120 - 5.147: 99.4484% ( 2) 00:16:51.086 5.147 - 5.173: 99.4589% ( 2) 00:16:51.086 5.200 - 5.227: 99.4641% ( 1) 00:16:51.086 5.227 - 5.253: 99.4746% ( 2) 00:16:51.086 5.253 - 5.280: 99.4799% ( 1) 00:16:51.086 5.360 - 5.387: 99.4851% ( 1) 00:16:51.086 5.413 - 5.440: 
99.4904% ( 1) 00:16:51.086 5.440 - 5.467: 99.4956% ( 1) 00:16:51.086 5.520 - 5.547: 99.5061% ( 2) 00:16:51.086 5.760 - 5.787: 99.5114% ( 1) 00:16:51.086 5.787 - 5.813: 99.5167% ( 1) 00:16:51.086 5.840 - 5.867: 99.5219% ( 1) 00:16:51.086 5.920 - 5.947: 99.5272% ( 1) 00:16:51.086 5.973 - 6.000: 99.5324% ( 1) 00:16:51.086 6.000 - 6.027: 99.5377% ( 1) 00:16:51.086 6.053 - 6.080: 99.5429% ( 1) 00:16:51.086 6.107 - 6.133: 99.5482% ( 1) 00:16:51.086 6.213 - 6.240: 99.5534% ( 1) 00:16:51.086 6.267 - 6.293: 99.5587% ( 1) 00:16:51.086 6.293 - 6.320: 99.5639% ( 1) 00:16:51.086 6.320 - 6.347: 99.5692% ( 1) 00:16:51.087 6.347 - 6.373: 99.5744% ( 1) 00:16:51.087 6.933 - 6.987: 99.5797% ( 1) 00:16:51.087 7.307 - 7.360: 99.5850% ( 1) 00:16:51.087 7.573 - 7.627: 99.5902% ( 1) 00:16:51.087 3986.773 - 4014.080: 100.0000% ( 78) 00:16:51.087 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:51.087 [ 00:16:51.087 { 00:16:51.087 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:51.087 "subtype": "Discovery", 00:16:51.087 "listen_addresses": [], 00:16:51.087 "allow_any_host": true, 00:16:51.087 "hosts": [] 00:16:51.087 }, 00:16:51.087 { 00:16:51.087 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:51.087 "subtype": "NVMe", 00:16:51.087 "listen_addresses": [ 
00:16:51.087 { 00:16:51.087 "trtype": "VFIOUSER", 00:16:51.087 "adrfam": "IPv4", 00:16:51.087 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:51.087 "trsvcid": "0" 00:16:51.087 } 00:16:51.087 ], 00:16:51.087 "allow_any_host": true, 00:16:51.087 "hosts": [], 00:16:51.087 "serial_number": "SPDK1", 00:16:51.087 "model_number": "SPDK bdev Controller", 00:16:51.087 "max_namespaces": 32, 00:16:51.087 "min_cntlid": 1, 00:16:51.087 "max_cntlid": 65519, 00:16:51.087 "namespaces": [ 00:16:51.087 { 00:16:51.087 "nsid": 1, 00:16:51.087 "bdev_name": "Malloc1", 00:16:51.087 "name": "Malloc1", 00:16:51.087 "nguid": "036CA36D4B8643188D5773BD71B387FE", 00:16:51.087 "uuid": "036ca36d-4b86-4318-8d57-73bd71b387fe" 00:16:51.087 }, 00:16:51.087 { 00:16:51.087 "nsid": 2, 00:16:51.087 "bdev_name": "Malloc3", 00:16:51.087 "name": "Malloc3", 00:16:51.087 "nguid": "6BFD2C9B584C46FF92D133808709F4C0", 00:16:51.087 "uuid": "6bfd2c9b-584c-46ff-92d1-33808709f4c0" 00:16:51.087 } 00:16:51.087 ] 00:16:51.087 }, 00:16:51.087 { 00:16:51.087 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:51.087 "subtype": "NVMe", 00:16:51.087 "listen_addresses": [ 00:16:51.087 { 00:16:51.087 "trtype": "VFIOUSER", 00:16:51.087 "adrfam": "IPv4", 00:16:51.087 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:51.087 "trsvcid": "0" 00:16:51.087 } 00:16:51.087 ], 00:16:51.087 "allow_any_host": true, 00:16:51.087 "hosts": [], 00:16:51.087 "serial_number": "SPDK2", 00:16:51.087 "model_number": "SPDK bdev Controller", 00:16:51.087 "max_namespaces": 32, 00:16:51.087 "min_cntlid": 1, 00:16:51.087 "max_cntlid": 65519, 00:16:51.087 "namespaces": [ 00:16:51.087 { 00:16:51.087 "nsid": 1, 00:16:51.087 "bdev_name": "Malloc2", 00:16:51.087 "name": "Malloc2", 00:16:51.087 "nguid": "B88304017D274B30B5308DAF3A1FD01D", 00:16:51.087 "uuid": "b8830401-7d27-4b30-b530-8daf3a1fd01d" 00:16:51.087 } 00:16:51.087 ] 00:16:51.087 } 00:16:51.087 ] 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4092396 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:51.087 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:51.348 Malloc4 00:16:51.348 [2024-11-19 11:10:59.491896] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:51.348 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:51.348 [2024-11-19 11:10:59.656021] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:51.348 
11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:51.610 Asynchronous Event Request test 00:16:51.610 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.610 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.610 Registering asynchronous event callbacks... 00:16:51.610 Starting namespace attribute notice tests for all controllers... 00:16:51.610 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:51.610 aer_cb - Changed Namespace 00:16:51.610 Cleaning up... 00:16:51.610 [ 00:16:51.610 { 00:16:51.610 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:51.610 "subtype": "Discovery", 00:16:51.610 "listen_addresses": [], 00:16:51.610 "allow_any_host": true, 00:16:51.610 "hosts": [] 00:16:51.610 }, 00:16:51.610 { 00:16:51.610 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:51.610 "subtype": "NVMe", 00:16:51.610 "listen_addresses": [ 00:16:51.610 { 00:16:51.610 "trtype": "VFIOUSER", 00:16:51.610 "adrfam": "IPv4", 00:16:51.610 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:51.610 "trsvcid": "0" 00:16:51.610 } 00:16:51.610 ], 00:16:51.610 "allow_any_host": true, 00:16:51.610 "hosts": [], 00:16:51.610 "serial_number": "SPDK1", 00:16:51.610 "model_number": "SPDK bdev Controller", 00:16:51.610 "max_namespaces": 32, 00:16:51.610 "min_cntlid": 1, 00:16:51.610 "max_cntlid": 65519, 00:16:51.610 "namespaces": [ 00:16:51.610 { 00:16:51.610 "nsid": 1, 00:16:51.610 "bdev_name": "Malloc1", 00:16:51.610 "name": "Malloc1", 00:16:51.610 "nguid": "036CA36D4B8643188D5773BD71B387FE", 00:16:51.610 "uuid": "036ca36d-4b86-4318-8d57-73bd71b387fe" 00:16:51.610 }, 00:16:51.610 { 00:16:51.610 "nsid": 2, 00:16:51.610 "bdev_name": "Malloc3", 00:16:51.610 "name": "Malloc3", 00:16:51.610 "nguid": "6BFD2C9B584C46FF92D133808709F4C0", 00:16:51.610 "uuid": "6bfd2c9b-584c-46ff-92d1-33808709f4c0" 
00:16:51.610 } 00:16:51.610 ] 00:16:51.610 }, 00:16:51.610 { 00:16:51.610 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:51.610 "subtype": "NVMe", 00:16:51.610 "listen_addresses": [ 00:16:51.611 { 00:16:51.611 "trtype": "VFIOUSER", 00:16:51.611 "adrfam": "IPv4", 00:16:51.611 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:51.611 "trsvcid": "0" 00:16:51.611 } 00:16:51.611 ], 00:16:51.611 "allow_any_host": true, 00:16:51.611 "hosts": [], 00:16:51.611 "serial_number": "SPDK2", 00:16:51.611 "model_number": "SPDK bdev Controller", 00:16:51.611 "max_namespaces": 32, 00:16:51.611 "min_cntlid": 1, 00:16:51.611 "max_cntlid": 65519, 00:16:51.611 "namespaces": [ 00:16:51.611 { 00:16:51.611 "nsid": 1, 00:16:51.611 "bdev_name": "Malloc2", 00:16:51.611 "name": "Malloc2", 00:16:51.611 "nguid": "B88304017D274B30B5308DAF3A1FD01D", 00:16:51.611 "uuid": "b8830401-7d27-4b30-b530-8daf3a1fd01d" 00:16:51.611 }, 00:16:51.611 { 00:16:51.611 "nsid": 2, 00:16:51.611 "bdev_name": "Malloc4", 00:16:51.611 "name": "Malloc4", 00:16:51.611 "nguid": "247C4ACE5DD249BD8DA5A2D22FA59D0F", 00:16:51.611 "uuid": "247c4ace-5dd2-49bd-8da5-a2d22fa59d0f" 00:16:51.611 } 00:16:51.611 ] 00:16:51.611 } 00:16:51.611 ] 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4092396 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4083302 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 4083302 ']' 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 4083302 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4083302 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4083302' 00:16:51.611 killing process with pid 4083302 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 4083302 00:16:51.611 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 4083302 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4092715 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4092715' 00:16:51.872 Process pid: 4092715 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4092715 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 4092715 ']' 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.872 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:51.872 [2024-11-19 11:11:00.164076] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:51.872 [2024-11-19 11:11:00.165006] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:16:51.872 [2024-11-19 11:11:00.165048] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.134 [2024-11-19 11:11:00.244481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.134 [2024-11-19 11:11:00.279600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:52.134 [2024-11-19 11:11:00.279635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.134 [2024-11-19 11:11:00.279643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.134 [2024-11-19 11:11:00.279650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.134 [2024-11-19 11:11:00.279656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.134 [2024-11-19 11:11:00.281180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.134 [2024-11-19 11:11:00.281294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.134 [2024-11-19 11:11:00.281449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.134 [2024-11-19 11:11:00.281449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.134 [2024-11-19 11:11:00.336350] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:52.134 [2024-11-19 11:11:00.336625] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:52.134 [2024-11-19 11:11:00.337535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:52.134 [2024-11-19 11:11:00.337894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:52.134 [2024-11-19 11:11:00.338046] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:52.706 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.706 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:52.706 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:53.648 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:53.910 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:53.910 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:53.910 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:53.910 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:53.910 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:54.170 Malloc1 00:16:54.170 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:54.432 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:54.432 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:54.692 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:54.692 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:54.692 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:54.954 Malloc2 00:16:54.954 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:54.954 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:55.215 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4092715 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 4092715 ']' 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 4092715 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.476 11:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4092715 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4092715' 00:16:55.476 killing process with pid 4092715 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 4092715 00:16:55.476 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 4092715 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:55.737 00:16:55.737 real 0m51.509s 00:16:55.737 user 3m17.482s 00:16:55.737 sys 0m2.811s 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:55.737 ************************************ 00:16:55.737 END TEST nvmf_vfio_user 00:16:55.737 ************************************ 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.737 ************************************ 00:16:55.737 START TEST nvmf_vfio_user_nvme_compliance 00:16:55.737 ************************************ 00:16:55.737 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:55.737 * Looking for test storage... 00:16:55.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:55.737 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.737 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.737 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.000 11:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.000 11:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:56.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.000 --rc genhtml_branch_coverage=1 00:16:56.000 --rc genhtml_function_coverage=1 00:16:56.000 --rc genhtml_legend=1 00:16:56.000 --rc geninfo_all_blocks=1 00:16:56.000 --rc geninfo_unexecuted_blocks=1 00:16:56.000 00:16:56.000 ' 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:56.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.000 --rc genhtml_branch_coverage=1 00:16:56.000 --rc genhtml_function_coverage=1 00:16:56.000 --rc genhtml_legend=1 00:16:56.000 --rc geninfo_all_blocks=1 00:16:56.000 --rc geninfo_unexecuted_blocks=1 00:16:56.000 00:16:56.000 ' 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:56.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.000 --rc genhtml_branch_coverage=1 00:16:56.000 --rc genhtml_function_coverage=1 00:16:56.000 --rc 
genhtml_legend=1 00:16:56.000 --rc geninfo_all_blocks=1 00:16:56.000 --rc geninfo_unexecuted_blocks=1 00:16:56.000 00:16:56.000 ' 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:56.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.000 --rc genhtml_branch_coverage=1 00:16:56.000 --rc genhtml_function_coverage=1 00:16:56.000 --rc genhtml_legend=1 00:16:56.000 --rc geninfo_all_blocks=1 00:16:56.000 --rc geninfo_unexecuted_blocks=1 00:16:56.000 00:16:56.000 ' 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.000 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.001 11:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.001 11:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=4093482 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4093482' 00:16:56.001 Process pid: 4093482 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4093482 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 4093482 ']' 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.001 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:56.001 [2024-11-19 11:11:04.241463] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:16:56.001 [2024-11-19 11:11:04.241534] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.001 [2024-11-19 11:11:04.325633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.263 [2024-11-19 11:11:04.366881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.263 [2024-11-19 11:11:04.366917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.263 [2024-11-19 11:11:04.366926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.264 [2024-11-19 11:11:04.366932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.264 [2024-11-19 11:11:04.366938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.264 [2024-11-19 11:11:04.368526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.264 [2024-11-19 11:11:04.368645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.264 [2024-11-19 11:11:04.368642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.836 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.836 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:56.836 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.779 11:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.779 malloc0 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:57.779 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:58.040 00:16:58.041 00:16:58.041 CUnit - A unit testing framework for C - Version 2.1-3 00:16:58.041 http://cunit.sourceforge.net/ 00:16:58.041 00:16:58.041 00:16:58.041 Suite: nvme_compliance 00:16:58.041 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 11:11:06.323312] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.041 [2024-11-19 11:11:06.324659] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:58.041 [2024-11-19 11:11:06.324670] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:58.041 [2024-11-19 11:11:06.324675] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:58.041 [2024-11-19 11:11:06.326335] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.041 passed 00:16:58.301 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 11:11:06.420962] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.301 [2024-11-19 11:11:06.423982] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.301 passed 00:16:58.301 Test: admin_identify_ns ...[2024-11-19 11:11:06.521109] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.301 [2024-11-19 11:11:06.580876] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:58.301 [2024-11-19 11:11:06.588885] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:58.301 [2024-11-19 11:11:06.609992] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:58.301 passed 00:16:58.562 Test: admin_get_features_mandatory_features ...[2024-11-19 11:11:06.705043] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.562 [2024-11-19 11:11:06.708058] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.562 passed 00:16:58.562 Test: admin_get_features_optional_features ...[2024-11-19 11:11:06.800590] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.562 [2024-11-19 11:11:06.803612] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.562 passed 00:16:58.562 Test: admin_set_features_number_of_queues ...[2024-11-19 11:11:06.897779] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.822 [2024-11-19 11:11:06.999977] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.822 passed 00:16:58.822 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 11:11:07.094012] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.822 [2024-11-19 11:11:07.097033] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.822 passed 00:16:59.083 Test: admin_get_log_page_with_lpo ...[2024-11-19 11:11:07.191105] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.083 [2024-11-19 11:11:07.258874] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:59.083 [2024-11-19 11:11:07.271913] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.083 passed 00:16:59.083 Test: fabric_property_get ...[2024-11-19 11:11:07.363981] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.083 [2024-11-19 11:11:07.365227] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:59.083 [2024-11-19 11:11:07.366996] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.083 passed 00:16:59.344 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 11:11:07.461623] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.344 [2024-11-19 11:11:07.462884] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:59.344 [2024-11-19 11:11:07.464651] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.344 passed 00:16:59.344 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 11:11:07.557797] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.344 [2024-11-19 11:11:07.642868] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:59.344 [2024-11-19 11:11:07.658867] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:59.344 [2024-11-19 11:11:07.663950] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.605 passed 00:16:59.605 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 11:11:07.754569] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.605 [2024-11-19 11:11:07.755821] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:59.605 [2024-11-19 11:11:07.757587] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.605 passed 00:16:59.605 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 11:11:07.851112] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.605 [2024-11-19 11:11:07.926873] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:59.605 [2024-11-19 
11:11:07.950868] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:59.605 [2024-11-19 11:11:07.955957] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.866 passed 00:16:59.866 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 11:11:08.049969] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.866 [2024-11-19 11:11:08.051216] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:59.866 [2024-11-19 11:11:08.051238] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:59.866 [2024-11-19 11:11:08.052992] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.866 passed 00:16:59.866 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 11:11:08.145113] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.126 [2024-11-19 11:11:08.237872] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:00.126 [2024-11-19 11:11:08.245870] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:00.127 [2024-11-19 11:11:08.253872] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:00.127 [2024-11-19 11:11:08.261873] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:00.127 [2024-11-19 11:11:08.290944] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.127 passed 00:17:00.127 Test: admin_create_io_sq_verify_pc ...[2024-11-19 11:11:08.384577] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.127 [2024-11-19 11:11:08.399881] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:00.127 [2024-11-19 11:11:08.417736] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.127 passed 00:17:00.387 Test: admin_create_io_qp_max_qps ...[2024-11-19 11:11:08.512301] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.329 [2024-11-19 11:11:09.601875] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:01.902 [2024-11-19 11:11:09.977961] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.902 passed 00:17:01.902 Test: admin_create_io_sq_shared_cq ...[2024-11-19 11:11:10.073241] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.902 [2024-11-19 11:11:10.205869] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:01.902 [2024-11-19 11:11:10.242933] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.168 passed 00:17:02.168 00:17:02.168 Run Summary: Type Total Ran Passed Failed Inactive 00:17:02.168 suites 1 1 n/a 0 0 00:17:02.168 tests 18 18 18 0 0 00:17:02.168 asserts 360 360 360 0 n/a 00:17:02.168 00:17:02.168 Elapsed time = 1.640 seconds 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4093482 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 4093482 ']' 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 4093482 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4093482 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4093482' 00:17:02.168 killing process with pid 4093482 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 4093482 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 4093482 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:02.168 00:17:02.168 real 0m6.549s 00:17:02.168 user 0m18.536s 00:17:02.168 sys 0m0.558s 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.168 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.168 ************************************ 00:17:02.168 END TEST nvmf_vfio_user_nvme_compliance 00:17:02.168 ************************************ 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:02.432 ************************************ 00:17:02.432 START TEST nvmf_vfio_user_fuzz 00:17:02.432 ************************************ 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:02.432 * Looking for test storage... 00:17:02.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.432 11:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.432 --rc genhtml_branch_coverage=1 00:17:02.432 --rc genhtml_function_coverage=1 00:17:02.432 --rc genhtml_legend=1 00:17:02.432 --rc geninfo_all_blocks=1 00:17:02.432 --rc geninfo_unexecuted_blocks=1 00:17:02.432 00:17:02.432 ' 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.432 --rc genhtml_branch_coverage=1 00:17:02.432 --rc genhtml_function_coverage=1 00:17:02.432 --rc genhtml_legend=1 00:17:02.432 --rc geninfo_all_blocks=1 00:17:02.432 --rc geninfo_unexecuted_blocks=1 00:17:02.432 00:17:02.432 ' 00:17:02.432 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.433 --rc genhtml_branch_coverage=1 00:17:02.433 --rc genhtml_function_coverage=1 00:17:02.433 --rc genhtml_legend=1 00:17:02.433 --rc geninfo_all_blocks=1 00:17:02.433 --rc geninfo_unexecuted_blocks=1 00:17:02.433 00:17:02.433 ' 00:17:02.433 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:02.433 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:02.433 --rc genhtml_branch_coverage=1 00:17:02.433 --rc genhtml_function_coverage=1 00:17:02.433 --rc genhtml_legend=1 00:17:02.433 --rc geninfo_all_blocks=1 00:17:02.433 --rc geninfo_unexecuted_blocks=1 00:17:02.433 00:17:02.433 ' 00:17:02.433 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.694 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.695 11:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4094891 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4094891' 00:17:02.695 Process pid: 4094891 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4094891 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 4094891 ']' 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.695 11:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.695 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:03.638 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.638 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:03.638 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:04.581 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:04.581 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.582 malloc0 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:04.582 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:36.691 Fuzzing completed. Shutting down the fuzz application 00:17:36.691 00:17:36.691 Dumping successful admin opcodes: 00:17:36.691 8, 9, 10, 24, 00:17:36.691 Dumping successful io opcodes: 00:17:36.691 0, 00:17:36.691 NS: 0x20000081ef00 I/O qp, Total commands completed: 1215444, total successful commands: 4766, random_seed: 145379136 00:17:36.691 NS: 0x20000081ef00 admin qp, Total commands completed: 152744, total successful commands: 1233, random_seed: 64922560 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4094891 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 4094891 ']' 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 4094891 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4094891 00:17:36.691 11:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4094891' 00:17:36.691 killing process with pid 4094891 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 4094891 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 4094891 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:36.691 00:17:36.691 real 0m33.799s 00:17:36.691 user 0m40.085s 00:17:36.691 sys 0m24.580s 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:36.691 ************************************ 00:17:36.691 END TEST nvmf_vfio_user_fuzz 00:17:36.691 ************************************ 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.691 ************************************ 00:17:36.691 START TEST nvmf_auth_target 00:17:36.691 ************************************ 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:36.691 * Looking for test storage... 00:17:36.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.691 11:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.691 11:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:36.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.691 --rc genhtml_branch_coverage=1 00:17:36.691 --rc genhtml_function_coverage=1 00:17:36.691 --rc genhtml_legend=1 00:17:36.691 --rc geninfo_all_blocks=1 00:17:36.691 --rc geninfo_unexecuted_blocks=1 00:17:36.691 00:17:36.691 ' 00:17:36.691 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:36.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.691 --rc genhtml_branch_coverage=1 00:17:36.691 --rc genhtml_function_coverage=1 00:17:36.691 --rc genhtml_legend=1 00:17:36.691 --rc geninfo_all_blocks=1 00:17:36.691 --rc geninfo_unexecuted_blocks=1 00:17:36.691 00:17:36.691 ' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:36.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.692 --rc genhtml_branch_coverage=1 00:17:36.692 --rc genhtml_function_coverage=1 00:17:36.692 --rc genhtml_legend=1 00:17:36.692 --rc geninfo_all_blocks=1 00:17:36.692 --rc geninfo_unexecuted_blocks=1 00:17:36.692 00:17:36.692 ' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:36.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.692 --rc genhtml_branch_coverage=1 00:17:36.692 --rc genhtml_function_coverage=1 00:17:36.692 --rc genhtml_legend=1 00:17:36.692 
--rc geninfo_all_blocks=1 00:17:36.692 --rc geninfo_unexecuted_blocks=1 00:17:36.692 00:17:36.692 ' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.692 
11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:36.692 11:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.692 11:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.692 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:44.853 11:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.853 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:44.854 11:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:44.854 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:44.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.854 
11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:44.854 Found net devices under 0000:31:00.0: cvl_0_0 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.854 
11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:44.854 Found net devices under 0000:31:00.1: cvl_0_1 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:44.854 11:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.854 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:44.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:17:44.854 00:17:44.854 --- 10.0.0.2 ping statistics --- 00:17:44.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.854 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:17:44.854 00:17:44.854 --- 10.0.0.1 ping statistics --- 00:17:44.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.854 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.854 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=4105570 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 4105570 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4105570 ']' 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.115 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=4105903 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a4f74eb21c4f5e3b6227d83f6ee57fc3597296ad3c441970 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rMH 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a4f74eb21c4f5e3b6227d83f6ee57fc3597296ad3c441970 0 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a4f74eb21c4f5e3b6227d83f6ee57fc3597296ad3c441970 0 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a4f74eb21c4f5e3b6227d83f6ee57fc3597296ad3c441970 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rMH 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rMH 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rMH 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:46.056 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6bda6fa35d8b0e803d7f00490892ae7f5718b47038a33dd347aa6347196eae2c 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7Wz 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6bda6fa35d8b0e803d7f00490892ae7f5718b47038a33dd347aa6347196eae2c 3 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6bda6fa35d8b0e803d7f00490892ae7f5718b47038a33dd347aa6347196eae2c 3 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6bda6fa35d8b0e803d7f00490892ae7f5718b47038a33dd347aa6347196eae2c 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7Wz 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7Wz 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.7Wz 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0e5fc6c931611e09b6c8381d9c1e0ba8 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5NW 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0e5fc6c931611e09b6c8381d9c1e0ba8 1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
0e5fc6c931611e09b6c8381d9c1e0ba8 1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0e5fc6c931611e09b6c8381d9c1e0ba8 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5NW 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5NW 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.5NW 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3121ee4613533d9f7ac3abac964a7b86692abebc2d1cf54e 00:17:46.057 11:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5l1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3121ee4613533d9f7ac3abac964a7b86692abebc2d1cf54e 2 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3121ee4613533d9f7ac3abac964a7b86692abebc2d1cf54e 2 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3121ee4613533d9f7ac3abac964a7b86692abebc2d1cf54e 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5l1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5l1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.5l1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1426c44ed7f5c73c457e37f283eb3bf2d25809cd274d5ad9 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0A1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1426c44ed7f5c73c457e37f283eb3bf2d25809cd274d5ad9 2 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1426c44ed7f5c73c457e37f283eb3bf2d25809cd274d5ad9 2 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1426c44ed7f5c73c457e37f283eb3bf2d25809cd274d5ad9 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:46.057 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0A1 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0A1 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.0A1 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=be22491306977b6bfad96ed82d5de20f 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sfQ 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key be22491306977b6bfad96ed82d5de20f 1 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 be22491306977b6bfad96ed82d5de20f 1 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=be22491306977b6bfad96ed82d5de20f 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sfQ 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sfQ 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.sfQ 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c923da8bee3116d4c1ce2f64e528ae0b2d6adb5ea93f141d2b8c25ec03193c26 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ucz 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c923da8bee3116d4c1ce2f64e528ae0b2d6adb5ea93f141d2b8c25ec03193c26 3 00:17:46.318 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 c923da8bee3116d4c1ce2f64e528ae0b2d6adb5ea93f141d2b8c25ec03193c26 3 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c923da8bee3116d4c1ce2f64e528ae0b2d6adb5ea93f141d2b8c25ec03193c26 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ucz 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ucz 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ucz 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 4105570 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4105570 ']' 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
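The gen_dhchap_key calls traced above (target/auth.sh@94-@97) draw len/2 random bytes with xxd, wrap the hex string into a DHHC-1 secret via an inline python snippet, and store it mode 0600 in a mktemp file. Below is a minimal standalone sketch of that flow; the DHHC-1 payload layout (base64 over the ASCII hex key plus a little-endian CRC32 trailer) is inferred from the secrets visible later in this log and is an assumption — check it against format_key in nvmf/common.sh before reusing.

```shell
# Sketch of the gen_dhchap_key flow seen in the trace (assumptions noted inline).
gen_dhchap_key() {
    local digest=$1 len=$2
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    # xxd prints two hex chars per byte, so len hex chars need len/2 random
    # bytes; fall back to POSIX od if xxd is not installed.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom 2>/dev/null) ||
        key=$(od -vAn -N$((len / 2)) -tx1 /dev/urandom | tr -d ' \n')
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Assumption: the secret is "DHHC-1:<digest>:<base64(hexkey || crc32)>:",
    # with the CRC32 of the ASCII hex key appended little-endian.
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, binascii, sys
key = sys.argv[1].encode()
payload = key + binascii.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(payload).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}
```

For a 32-hex-char sha256 key this yields a 36-byte payload (32 ASCII chars plus 4 CRC bytes), i.e. exactly 48 base64 characters with no padding, which matches the `DHHC-1:01:` secrets echoed later in the log.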
00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.319 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 4105903 /var/tmp/host.sock 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4105903 ']' 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:46.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
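Once both RPC sockets are listening (the target on /var/tmp/spdk.sock, the host on /var/tmp/host.sock), the target/auth.sh@108-@113 loop that follows registers every key file on both sides, adding ckey$i only when a controller key was generated for that slot. A dry-run sketch of that loop — the rpc.py path is a placeholder assumption, and DRY_RUN=1 only prints the commands instead of talking to a live SPDK instance:

```shell
# Dry-run sketch of the keyring_file_add_key loop (target/auth.sh@108-@113).
RPC=${RPC:-scripts/rpc.py}   # assumption: relative path into an SPDK checkout
run() { if [[ ${DRY_RUN:-0} == 1 ]]; then echo "$*"; else "$@"; fi; }

load_dhchap_keys() {
    # Expects two array *names*: key files and (possibly sparse) ctrlr-key files.
    local -n _keys=$1 _ckeys=$2
    local i
    for i in "${!_keys[@]}"; do
        # register on the target, then mirror on the host keyring
        run "$RPC" -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${_keys[i]}"
        run "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${_keys[i]}"
        if [[ -n ${_ckeys[i]:-} ]]; then
            run "$RPC" -s /var/tmp/spdk.sock keyring_file_add_key "ckey$i" "${_ckeys[i]}"
            run "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${_ckeys[i]}"
        fi
    done
}
```

This mirrors the trace's structure: rpc_cmd targets the default spdk.sock while hostrpc passes `-s /var/tmp/host.sock`, and the `[[ -n ... ]]` guard explains why ckey3 is skipped later (ckeys[3] is empty).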
00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.580 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.840 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.840 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:46.840 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rMH 00:17:46.840 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.840 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.840 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.840 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rMH 00:17:46.840 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rMH 00:17:46.840 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.7Wz ]] 00:17:46.840 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Wz 00:17:46.840 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.840 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.840 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.840 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Wz 00:17:46.840 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Wz 00:17:47.101 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:47.101 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5NW 00:17:47.101 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.101 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.101 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.101 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5NW 00:17:47.101 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5NW 00:17:47.361 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.5l1 ]] 00:17:47.361 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5l1 00:17:47.361 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.361 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.361 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.361 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5l1 00:17:47.361 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5l1 00:17:47.361 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:47.362 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0A1 00:17:47.362 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.362 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.362 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.362 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0A1 00:17:47.362 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0A1 00:17:47.624 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.sfQ ]] 00:17:47.624 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sfQ 00:17:47.624 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.624 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.624 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.624 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sfQ 00:17:47.624 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sfQ 00:17:47.884 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:47.884 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ucz 00:17:47.884 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.884 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.884 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.884 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ucz 00:17:47.884 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ucz 00:17:48.144 11:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.144 11:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.144 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.404 00:17:48.404 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.404 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.404 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.666 { 00:17:48.666 "cntlid": 1, 00:17:48.666 "qid": 0, 00:17:48.666 "state": "enabled", 00:17:48.666 "thread": "nvmf_tgt_poll_group_000", 00:17:48.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:48.666 "listen_address": { 00:17:48.666 "trtype": "TCP", 00:17:48.666 "adrfam": "IPv4", 00:17:48.666 "traddr": "10.0.0.2", 00:17:48.666 "trsvcid": "4420" 00:17:48.666 }, 00:17:48.666 "peer_address": { 00:17:48.666 "trtype": "TCP", 00:17:48.666 "adrfam": "IPv4", 00:17:48.666 "traddr": "10.0.0.1", 00:17:48.666 "trsvcid": "45032" 00:17:48.666 }, 00:17:48.666 "auth": { 00:17:48.666 "state": "completed", 00:17:48.666 "digest": "sha256", 00:17:48.666 "dhgroup": "null" 00:17:48.666 } 00:17:48.666 } 00:17:48.666 ]' 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.666 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.926 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:17:48.926 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:17:49.867 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.868 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.868 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.868 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.868 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.868 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.868 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:49.868 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.868 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.129 00:17:50.129 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.129 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.129 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.391 { 00:17:50.391 "cntlid": 3, 00:17:50.391 "qid": 0, 00:17:50.391 "state": "enabled", 00:17:50.391 "thread": "nvmf_tgt_poll_group_000", 00:17:50.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:50.391 "listen_address": { 00:17:50.391 "trtype": "TCP", 00:17:50.391 "adrfam": "IPv4", 00:17:50.391 
"traddr": "10.0.0.2", 00:17:50.391 "trsvcid": "4420" 00:17:50.391 }, 00:17:50.391 "peer_address": { 00:17:50.391 "trtype": "TCP", 00:17:50.391 "adrfam": "IPv4", 00:17:50.391 "traddr": "10.0.0.1", 00:17:50.391 "trsvcid": "45050" 00:17:50.391 }, 00:17:50.391 "auth": { 00:17:50.391 "state": "completed", 00:17:50.391 "digest": "sha256", 00:17:50.391 "dhgroup": "null" 00:17:50.391 } 00:17:50.391 } 00:17:50.391 ]' 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.391 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.652 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:17:50.652 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:17:51.594 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.594 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.595 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.855 00:17:51.855 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.855 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.855 
11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.855 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.855 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.856 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.856 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.118 { 00:17:52.118 "cntlid": 5, 00:17:52.118 "qid": 0, 00:17:52.118 "state": "enabled", 00:17:52.118 "thread": "nvmf_tgt_poll_group_000", 00:17:52.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:52.118 "listen_address": { 00:17:52.118 "trtype": "TCP", 00:17:52.118 "adrfam": "IPv4", 00:17:52.118 "traddr": "10.0.0.2", 00:17:52.118 "trsvcid": "4420" 00:17:52.118 }, 00:17:52.118 "peer_address": { 00:17:52.118 "trtype": "TCP", 00:17:52.118 "adrfam": "IPv4", 00:17:52.118 "traddr": "10.0.0.1", 00:17:52.118 "trsvcid": "45074" 00:17:52.118 }, 00:17:52.118 "auth": { 00:17:52.118 "state": "completed", 00:17:52.118 "digest": "sha256", 00:17:52.118 "dhgroup": "null" 00:17:52.118 } 00:17:52.118 } 00:17:52.118 ]' 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.118 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.379 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:17:52.379 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:17:52.951 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.951 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.951 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.951 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
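Each `connect_authenticate` iteration in this log validates the negotiated DH-HMAC-CHAP parameters the same way: it fetches the qpair list with `rpc.py nvmf_subsystem_get_qpairs` and runs three `jq` checks (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) against the expected values. A minimal Python sketch of that same validation, using a sample qpairs record trimmed from the output printed earlier in this log (field names match the actual RPC output; the helper name `check_auth` is illustrative, not part of the test scripts):

```python
import json

# Sample qpairs record as printed by `rpc.py nvmf_subsystem_get_qpairs`
# earlier in this log, trimmed to the fields the test inspects.
qpairs_json = '''
[
  {
    "cntlid": 3,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "null"
    }
  }
]
'''

def check_auth(qpairs, digest, dhgroup):
    """Mirror the three jq checks from target/auth.sh@75-77:
    the negotiated digest and DH group must match what was
    configured via bdev_nvme_set_options, and the auth state
    must have reached "completed"."""
    auth = qpairs[0]["auth"]
    assert auth["digest"] == digest, auth
    assert auth["dhgroup"] == dhgroup, auth
    assert auth["state"] == "completed", auth
    return True

print(check_auth(json.loads(qpairs_json), "sha256", "null"))  # prints True
```

In the log itself the expected digest/dhgroup pair advances through the test matrix (null, then ffdhe2048, under sha256) as the outer `for dhgroup` / `for keyid` loops in target/auth.sh iterate.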
00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.213 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.475 00:17:53.475 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.475 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.475 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.736 
11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.736 { 00:17:53.736 "cntlid": 7, 00:17:53.736 "qid": 0, 00:17:53.736 "state": "enabled", 00:17:53.736 "thread": "nvmf_tgt_poll_group_000", 00:17:53.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:53.736 "listen_address": { 00:17:53.736 "trtype": "TCP", 00:17:53.736 "adrfam": "IPv4", 00:17:53.736 "traddr": "10.0.0.2", 00:17:53.736 "trsvcid": "4420" 00:17:53.736 }, 00:17:53.736 "peer_address": { 00:17:53.736 "trtype": "TCP", 00:17:53.736 "adrfam": "IPv4", 00:17:53.736 "traddr": "10.0.0.1", 00:17:53.736 "trsvcid": "45094" 00:17:53.736 }, 00:17:53.736 "auth": { 00:17:53.736 "state": "completed", 00:17:53.736 "digest": "sha256", 00:17:53.736 "dhgroup": "null" 00:17:53.736 } 00:17:53.736 } 00:17:53.736 ]' 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:53.736 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.736 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.736 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.736 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.998 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:17:53.998 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:54.941 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.941 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.942 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.942 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.942 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.942 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.202 00:17:55.202 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.202 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.202 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.202 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.202 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.202 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.202 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.462 { 00:17:55.462 "cntlid": 9, 00:17:55.462 "qid": 0, 00:17:55.462 "state": "enabled", 00:17:55.462 "thread": "nvmf_tgt_poll_group_000", 00:17:55.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:55.462 "listen_address": { 00:17:55.462 "trtype": "TCP", 00:17:55.462 "adrfam": "IPv4", 00:17:55.462 "traddr": "10.0.0.2", 00:17:55.462 "trsvcid": "4420" 00:17:55.462 }, 00:17:55.462 "peer_address": { 00:17:55.462 "trtype": "TCP", 00:17:55.462 "adrfam": "IPv4", 00:17:55.462 "traddr": "10.0.0.1", 00:17:55.462 "trsvcid": "49424" 00:17:55.462 
}, 00:17:55.462 "auth": { 00:17:55.462 "state": "completed", 00:17:55.462 "digest": "sha256", 00:17:55.462 "dhgroup": "ffdhe2048" 00:17:55.462 } 00:17:55.462 } 00:17:55.462 ]' 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.462 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.724 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:17:55.724 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret 
DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:17:56.363 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.363 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.363 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.363 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.363 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.363 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.363 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.363 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.682 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.943 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.943 { 00:17:56.943 "cntlid": 11, 00:17:56.943 "qid": 0, 00:17:56.943 "state": "enabled", 00:17:56.943 "thread": "nvmf_tgt_poll_group_000", 00:17:56.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:56.943 "listen_address": { 00:17:56.943 "trtype": "TCP", 00:17:56.943 "adrfam": "IPv4", 00:17:56.943 "traddr": "10.0.0.2", 00:17:56.943 "trsvcid": "4420" 00:17:56.943 }, 00:17:56.943 "peer_address": { 00:17:56.943 "trtype": "TCP", 00:17:56.943 "adrfam": "IPv4", 00:17:56.943 "traddr": "10.0.0.1", 00:17:56.943 "trsvcid": "49438" 00:17:56.943 }, 00:17:56.943 "auth": { 00:17:56.943 "state": "completed", 00:17:56.943 "digest": "sha256", 00:17:56.943 "dhgroup": "ffdhe2048" 00:17:56.943 } 00:17:56.943 } 00:17:56.943 ]' 00:17:56.943 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.205 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.205 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.205 11:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.205 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.205 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.205 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.205 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.466 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:17:57.466 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:17:58.037 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.037 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.037 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:58.037 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.037 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.037 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.037 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.037 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.332 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.592 00:17:58.592 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.592 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.592 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.853 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.853 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.853 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.853 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.853 11:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.853 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.853 { 00:17:58.853 "cntlid": 13, 00:17:58.853 "qid": 0, 00:17:58.853 "state": "enabled", 00:17:58.853 "thread": "nvmf_tgt_poll_group_000", 00:17:58.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:58.853 "listen_address": { 00:17:58.853 "trtype": "TCP", 00:17:58.853 "adrfam": "IPv4", 00:17:58.853 "traddr": "10.0.0.2", 00:17:58.853 "trsvcid": "4420" 00:17:58.853 }, 00:17:58.853 "peer_address": { 00:17:58.853 "trtype": "TCP", 00:17:58.853 "adrfam": "IPv4", 00:17:58.853 "traddr": "10.0.0.1", 00:17:58.853 "trsvcid": "49462" 00:17:58.853 }, 00:17:58.853 "auth": { 00:17:58.853 "state": "completed", 00:17:58.853 "digest": "sha256", 00:17:58.853 "dhgroup": "ffdhe2048" 00:17:58.853 } 00:17:58.853 } 00:17:58.853 ]' 00:17:58.853 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.853 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.853 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.853 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.853 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.853 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.853 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.853 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.114 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:17:59.115 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.057 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.319 00:18:00.319 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.319 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.319 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.581 { 00:18:00.581 "cntlid": 15, 00:18:00.581 "qid": 0, 00:18:00.581 "state": "enabled", 00:18:00.581 "thread": "nvmf_tgt_poll_group_000", 00:18:00.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:00.581 "listen_address": { 00:18:00.581 "trtype": "TCP", 00:18:00.581 "adrfam": "IPv4", 00:18:00.581 "traddr": "10.0.0.2", 00:18:00.581 "trsvcid": "4420" 00:18:00.581 }, 00:18:00.581 "peer_address": { 00:18:00.581 "trtype": "TCP", 00:18:00.581 "adrfam": "IPv4", 00:18:00.581 "traddr": "10.0.0.1", 
00:18:00.581 "trsvcid": "49486" 00:18:00.581 }, 00:18:00.581 "auth": { 00:18:00.581 "state": "completed", 00:18:00.581 "digest": "sha256", 00:18:00.581 "dhgroup": "ffdhe2048" 00:18:00.581 } 00:18:00.581 } 00:18:00.581 ]' 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.581 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.842 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:00.843 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.787 11:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.787 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.047 00:18:02.047 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.047 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.047 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.310 { 00:18:02.310 "cntlid": 17, 00:18:02.310 "qid": 0, 00:18:02.310 "state": "enabled", 00:18:02.310 "thread": "nvmf_tgt_poll_group_000", 00:18:02.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:02.310 "listen_address": { 00:18:02.310 "trtype": "TCP", 00:18:02.310 "adrfam": "IPv4", 00:18:02.310 "traddr": "10.0.0.2", 00:18:02.310 "trsvcid": "4420" 00:18:02.310 }, 00:18:02.310 "peer_address": { 00:18:02.310 "trtype": "TCP", 00:18:02.310 "adrfam": "IPv4", 00:18:02.310 "traddr": "10.0.0.1", 00:18:02.310 "trsvcid": "49508" 00:18:02.310 }, 00:18:02.310 "auth": { 00:18:02.310 "state": "completed", 00:18:02.310 "digest": "sha256", 00:18:02.310 "dhgroup": "ffdhe3072" 00:18:02.310 } 00:18:02.310 } 00:18:02.310 ]' 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.310 11:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.310 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.571 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:02.571 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:03.149 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.410 11:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.410 11:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.410 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.672 00:18:03.672 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.672 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.672 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.933 { 00:18:03.933 "cntlid": 19, 00:18:03.933 "qid": 0, 00:18:03.933 "state": "enabled", 00:18:03.933 "thread": "nvmf_tgt_poll_group_000", 00:18:03.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:03.933 "listen_address": { 00:18:03.933 "trtype": "TCP", 00:18:03.933 "adrfam": "IPv4", 00:18:03.933 "traddr": "10.0.0.2", 00:18:03.933 "trsvcid": "4420" 00:18:03.933 }, 00:18:03.933 "peer_address": { 00:18:03.933 "trtype": "TCP", 00:18:03.933 "adrfam": "IPv4", 00:18:03.933 "traddr": "10.0.0.1", 00:18:03.933 "trsvcid": "34830" 00:18:03.933 }, 00:18:03.933 "auth": { 00:18:03.933 "state": "completed", 00:18:03.933 "digest": "sha256", 00:18:03.933 "dhgroup": "ffdhe3072" 00:18:03.933 } 00:18:03.933 } 00:18:03.933 ]' 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.933 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.194 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.194 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.194 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.194 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:04.194 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.137 11:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:05.137 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:05.399
00:18:05.399 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:05.399 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:05.399 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:05.660 {
00:18:05.660 "cntlid": 21,
00:18:05.660 "qid": 0,
00:18:05.660 "state": "enabled",
00:18:05.660 "thread": "nvmf_tgt_poll_group_000",
00:18:05.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:05.660 "listen_address": {
00:18:05.660 "trtype": "TCP",
00:18:05.660 "adrfam": "IPv4",
00:18:05.660 "traddr": "10.0.0.2",
00:18:05.660 "trsvcid": "4420"
00:18:05.660 },
00:18:05.660 "peer_address": {
00:18:05.660 "trtype": "TCP",
00:18:05.660 "adrfam": "IPv4",
00:18:05.660 "traddr": "10.0.0.1",
00:18:05.660 "trsvcid": "34846"
00:18:05.660 },
00:18:05.660 "auth": {
00:18:05.660 "state": "completed",
00:18:05.660 "digest": "sha256",
00:18:05.660 "dhgroup": "ffdhe3072"
00:18:05.660 }
00:18:05.660 }
00:18:05.660 ]'
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:05.660 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:05.921 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:05.921 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:05.922 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:05.922 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe:
00:18:05.922 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe:
00:18:06.865 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:06.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:06.865 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:07.126
00:18:07.126 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:07.126 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:07.126 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:07.387 {
00:18:07.387 "cntlid": 23,
00:18:07.387 "qid": 0,
00:18:07.387 "state": "enabled",
00:18:07.387 "thread": "nvmf_tgt_poll_group_000",
00:18:07.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:07.387 "listen_address": {
00:18:07.387 "trtype": "TCP",
00:18:07.387 "adrfam": "IPv4",
00:18:07.387 "traddr": "10.0.0.2",
00:18:07.387 "trsvcid": "4420"
00:18:07.387 },
00:18:07.387 "peer_address": {
00:18:07.387 "trtype": "TCP",
00:18:07.387 "adrfam": "IPv4",
00:18:07.387 "traddr": "10.0.0.1",
00:18:07.387 "trsvcid": "34884"
00:18:07.387 },
00:18:07.387 "auth": {
00:18:07.387 "state": "completed",
00:18:07.387 "digest": "sha256",
00:18:07.387 "dhgroup": "ffdhe3072"
00:18:07.387 }
00:18:07.387 }
00:18:07.387 ]'
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:07.387 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:07.647 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:07.647 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:07.647 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:07.647 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=:
00:18:07.647 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=:
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:08.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:08.589 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:08.850
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:09.111 {
00:18:09.111 "cntlid": 25,
00:18:09.111 "qid": 0,
00:18:09.111 "state": "enabled",
00:18:09.111 "thread": "nvmf_tgt_poll_group_000",
00:18:09.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:09.111 "listen_address": {
00:18:09.111 "trtype": "TCP",
00:18:09.111 "adrfam": "IPv4",
00:18:09.111 "traddr": "10.0.0.2",
00:18:09.111 "trsvcid": "4420"
00:18:09.111 },
00:18:09.111 "peer_address": {
00:18:09.111 "trtype": "TCP",
00:18:09.111 "adrfam": "IPv4",
00:18:09.111 "traddr": "10.0.0.1",
00:18:09.111 "trsvcid": "34902"
00:18:09.111 },
00:18:09.111 "auth": {
00:18:09.111 "state": "completed",
00:18:09.111 "digest": "sha256",
00:18:09.111 "dhgroup": "ffdhe4096"
00:18:09.111 }
00:18:09.111 }
00:18:09.111 ]'
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:09.111 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:09.373 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:09.373 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:09.373 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:09.373 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:09.373 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:09.635 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=:
00:18:09.635 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=:
00:18:10.207 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:10.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:10.207 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:10.207 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.207 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.207 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.207 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:10.207 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:10.207 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:10.468 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:18:10.468 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:10.468 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:10.468 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:10.468 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:10.469 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:10.469 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:10.469 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.469 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.469 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.469 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:10.469 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:10.469 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:10.729
00:18:10.729 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:10.729 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:10.729 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:10.990 {
00:18:10.990 "cntlid": 27,
00:18:10.990 "qid": 0,
00:18:10.990 "state": "enabled",
00:18:10.990 "thread": "nvmf_tgt_poll_group_000",
00:18:10.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:10.990 "listen_address": {
00:18:10.990 "trtype": "TCP",
00:18:10.990 "adrfam": "IPv4",
00:18:10.990 "traddr": "10.0.0.2",
00:18:10.990 "trsvcid": "4420"
00:18:10.990 },
00:18:10.990 "peer_address": {
00:18:10.990 "trtype": "TCP",
00:18:10.990 "adrfam": "IPv4",
00:18:10.990 "traddr": "10.0.0.1",
00:18:10.990 "trsvcid": "34930"
00:18:10.990 },
00:18:10.990 "auth": {
00:18:10.990 "state": "completed",
00:18:10.990 "digest": "sha256",
00:18:10.990 "dhgroup": "ffdhe4096"
00:18:10.990 }
00:18:10.990 }
00:18:10.990 ]'
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:10.990 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:11.254 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==:
00:18:11.254 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==:
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:12.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:12.197 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:12.459
00:18:12.459 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:12.459 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:12.459 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:12.720 {
00:18:12.720 "cntlid": 29,
00:18:12.720 "qid": 0,
00:18:12.720 "state": "enabled",
00:18:12.720 "thread": "nvmf_tgt_poll_group_000",
00:18:12.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:12.720 "listen_address": {
00:18:12.720 "trtype": "TCP",
00:18:12.720 "adrfam": "IPv4",
00:18:12.720 "traddr": "10.0.0.2",
00:18:12.720 "trsvcid": "4420"
00:18:12.720 },
00:18:12.720 "peer_address": {
00:18:12.720 "trtype": "TCP",
00:18:12.720 "adrfam": "IPv4",
00:18:12.720 "traddr": "10.0.0.1",
00:18:12.720 "trsvcid": "34968"
00:18:12.720 },
00:18:12.720 "auth": {
00:18:12.720 "state": "completed",
00:18:12.720 "digest": "sha256",
00:18:12.720 "dhgroup": "ffdhe4096"
00:18:12.720 }
00:18:12.720 }
00:18:12.720 ]'
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:12.720 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:12.721 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:12.721 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:12.983 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:12.983 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:12.983 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:12.983 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe:
00:18:12.983 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe:
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:13.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:18:13.921 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.922 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.922 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.922 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:13.922 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:13.922 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:14.181
00:18:14.181 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:14.181 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:14.181 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:14.442 {
00:18:14.442 "cntlid": 31,
00:18:14.442 "qid": 0,
00:18:14.442 "state": "enabled",
00:18:14.442 "thread": "nvmf_tgt_poll_group_000",
00:18:14.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:14.442 "listen_address": {
00:18:14.442 "trtype": "TCP",
00:18:14.442 "adrfam": "IPv4",
00:18:14.442 "traddr": "10.0.0.2",
00:18:14.442 "trsvcid": "4420"
00:18:14.442 },
00:18:14.442 "peer_address": {
00:18:14.442 "trtype": "TCP",
00:18:14.442 "adrfam": "IPv4",
00:18:14.442 "traddr": "10.0.0.1",
00:18:14.442 "trsvcid": "41184"
00:18:14.442 },
00:18:14.442 "auth": {
00:18:14.442 "state": "completed",
00:18:14.442 "digest": "sha256",
00:18:14.442 "dhgroup": "ffdhe4096"
00:18:14.442 }
00:18:14.442 }
00:18:14.442 ]'
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:14.442 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:14.702 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:14.702 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:14.703 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:14.703 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:14.703 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:14.703 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=:
00:18:14.703 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=:
00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:15.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:15.642 11:12:23
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.642 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.210 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.210 { 00:18:16.210 "cntlid": 33, 00:18:16.210 "qid": 0, 00:18:16.210 "state": "enabled", 00:18:16.210 "thread": "nvmf_tgt_poll_group_000", 00:18:16.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:16.210 "listen_address": { 00:18:16.210 "trtype": "TCP", 00:18:16.210 "adrfam": "IPv4", 00:18:16.210 "traddr": "10.0.0.2", 00:18:16.210 
"trsvcid": "4420" 00:18:16.210 }, 00:18:16.210 "peer_address": { 00:18:16.210 "trtype": "TCP", 00:18:16.210 "adrfam": "IPv4", 00:18:16.210 "traddr": "10.0.0.1", 00:18:16.210 "trsvcid": "41200" 00:18:16.210 }, 00:18:16.210 "auth": { 00:18:16.210 "state": "completed", 00:18:16.210 "digest": "sha256", 00:18:16.210 "dhgroup": "ffdhe6144" 00:18:16.210 } 00:18:16.210 } 00:18:16.210 ]' 00:18:16.210 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.470 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.470 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.470 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.470 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.470 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.470 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.470 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.730 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:16.730 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:17.299 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.299 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.299 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.299 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.299 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.300 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.300 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.300 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.560 11:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.560 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.820 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.080 { 00:18:18.080 "cntlid": 35, 00:18:18.080 "qid": 0, 00:18:18.080 "state": "enabled", 00:18:18.080 "thread": "nvmf_tgt_poll_group_000", 00:18:18.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:18.080 "listen_address": { 00:18:18.080 "trtype": "TCP", 00:18:18.080 "adrfam": "IPv4", 00:18:18.080 "traddr": "10.0.0.2", 00:18:18.080 "trsvcid": "4420" 00:18:18.080 }, 00:18:18.080 "peer_address": { 00:18:18.080 "trtype": "TCP", 00:18:18.080 "adrfam": "IPv4", 00:18:18.080 "traddr": "10.0.0.1", 00:18:18.080 "trsvcid": "41228" 00:18:18.080 }, 00:18:18.080 "auth": { 00:18:18.080 "state": "completed", 00:18:18.080 "digest": "sha256", 00:18:18.080 "dhgroup": "ffdhe6144" 00:18:18.080 } 00:18:18.080 } 00:18:18.080 ]' 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.080 11:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.080 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.341 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.341 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.341 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.341 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.341 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.603 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:18.603 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:19.175 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.175 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.175 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.175 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.175 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.175 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.175 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.175 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.436 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.696 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.956 11:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.956 { 00:18:19.956 "cntlid": 37, 00:18:19.956 "qid": 0, 00:18:19.956 "state": "enabled", 00:18:19.956 "thread": "nvmf_tgt_poll_group_000", 00:18:19.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:19.956 "listen_address": { 00:18:19.956 "trtype": "TCP", 00:18:19.956 "adrfam": "IPv4", 00:18:19.956 "traddr": "10.0.0.2", 00:18:19.956 "trsvcid": "4420" 00:18:19.956 }, 00:18:19.956 "peer_address": { 00:18:19.956 "trtype": "TCP", 00:18:19.956 "adrfam": "IPv4", 00:18:19.956 "traddr": "10.0.0.1", 00:18:19.956 "trsvcid": "41256" 00:18:19.956 }, 00:18:19.956 "auth": { 00:18:19.956 "state": "completed", 00:18:19.956 "digest": "sha256", 00:18:19.956 "dhgroup": "ffdhe6144" 00:18:19.956 } 00:18:19.956 } 00:18:19.956 ]' 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.956 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.216 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.216 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.216 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.216 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.216 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.216 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:20.216 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:21.157 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.157 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.157 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.157 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.157 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.157 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.157 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.157 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.417 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.676 00:18:21.676 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.676 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.676 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.936 { 00:18:21.936 "cntlid": 39, 00:18:21.936 "qid": 0, 00:18:21.936 "state": "enabled", 00:18:21.936 "thread": "nvmf_tgt_poll_group_000", 00:18:21.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:21.936 "listen_address": { 00:18:21.936 "trtype": "TCP", 00:18:21.936 "adrfam": 
"IPv4", 00:18:21.936 "traddr": "10.0.0.2", 00:18:21.936 "trsvcid": "4420" 00:18:21.936 }, 00:18:21.936 "peer_address": { 00:18:21.936 "trtype": "TCP", 00:18:21.936 "adrfam": "IPv4", 00:18:21.936 "traddr": "10.0.0.1", 00:18:21.936 "trsvcid": "41278" 00:18:21.936 }, 00:18:21.936 "auth": { 00:18:21.936 "state": "completed", 00:18:21.936 "digest": "sha256", 00:18:21.936 "dhgroup": "ffdhe6144" 00:18:21.936 } 00:18:21.936 } 00:18:21.936 ]' 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.936 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.937 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.937 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.937 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.196 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:22.196 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.139 
11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.139 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.709 00:18:23.709 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.709 11:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.709 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.968 { 00:18:23.968 "cntlid": 41, 00:18:23.968 "qid": 0, 00:18:23.968 "state": "enabled", 00:18:23.968 "thread": "nvmf_tgt_poll_group_000", 00:18:23.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:23.968 "listen_address": { 00:18:23.968 "trtype": "TCP", 00:18:23.968 "adrfam": "IPv4", 00:18:23.968 "traddr": "10.0.0.2", 00:18:23.968 "trsvcid": "4420" 00:18:23.968 }, 00:18:23.968 "peer_address": { 00:18:23.968 "trtype": "TCP", 00:18:23.968 "adrfam": "IPv4", 00:18:23.968 "traddr": "10.0.0.1", 00:18:23.968 "trsvcid": "41306" 00:18:23.968 }, 00:18:23.968 "auth": { 00:18:23.968 "state": "completed", 00:18:23.968 "digest": "sha256", 00:18:23.968 "dhgroup": "ffdhe8192" 00:18:23.968 } 00:18:23.968 } 00:18:23.968 ]' 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.968 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.227 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:24.227 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.171 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.740 00:18:25.740 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.740 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.740 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.999 11:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.999 { 00:18:25.999 "cntlid": 43, 00:18:25.999 "qid": 0, 00:18:25.999 "state": "enabled", 00:18:25.999 "thread": "nvmf_tgt_poll_group_000", 00:18:25.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:25.999 "listen_address": { 00:18:25.999 "trtype": "TCP", 00:18:25.999 "adrfam": "IPv4", 00:18:25.999 "traddr": "10.0.0.2", 00:18:25.999 "trsvcid": "4420" 00:18:25.999 }, 00:18:25.999 "peer_address": { 00:18:25.999 "trtype": "TCP", 00:18:25.999 "adrfam": "IPv4", 00:18:25.999 "traddr": "10.0.0.1", 00:18:25.999 "trsvcid": "47618" 00:18:25.999 }, 00:18:25.999 "auth": { 00:18:25.999 "state": "completed", 00:18:25.999 "digest": "sha256", 00:18:25.999 "dhgroup": "ffdhe8192" 00:18:25.999 } 00:18:25.999 } 00:18:25.999 ]' 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.999 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.258 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:26.259 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:26.830 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.091 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.661 00:18:27.661 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.661 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.662 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.922 { 00:18:27.922 "cntlid": 45, 00:18:27.922 "qid": 0, 00:18:27.922 "state": "enabled", 00:18:27.922 "thread": "nvmf_tgt_poll_group_000", 00:18:27.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:27.922 
"listen_address": { 00:18:27.922 "trtype": "TCP", 00:18:27.922 "adrfam": "IPv4", 00:18:27.922 "traddr": "10.0.0.2", 00:18:27.922 "trsvcid": "4420" 00:18:27.922 }, 00:18:27.922 "peer_address": { 00:18:27.922 "trtype": "TCP", 00:18:27.922 "adrfam": "IPv4", 00:18:27.922 "traddr": "10.0.0.1", 00:18:27.922 "trsvcid": "47646" 00:18:27.922 }, 00:18:27.922 "auth": { 00:18:27.922 "state": "completed", 00:18:27.922 "digest": "sha256", 00:18:27.922 "dhgroup": "ffdhe8192" 00:18:27.922 } 00:18:27.922 } 00:18:27.922 ]' 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.922 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.183 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:28.183 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:29.123 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.123 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.123 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.123 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.123 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.123 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.123 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.123 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.384 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.955 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.955 { 00:18:29.955 "cntlid": 47, 00:18:29.955 "qid": 0, 00:18:29.955 "state": "enabled", 00:18:29.955 "thread": "nvmf_tgt_poll_group_000", 00:18:29.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:29.955 "listen_address": { 00:18:29.955 "trtype": "TCP", 00:18:29.955 "adrfam": "IPv4", 00:18:29.955 "traddr": "10.0.0.2", 00:18:29.955 "trsvcid": "4420" 00:18:29.955 }, 00:18:29.955 "peer_address": { 00:18:29.955 "trtype": "TCP", 00:18:29.955 "adrfam": "IPv4", 00:18:29.955 "traddr": "10.0.0.1", 00:18:29.955 "trsvcid": "47668" 00:18:29.955 }, 00:18:29.955 "auth": { 00:18:29.955 "state": "completed", 00:18:29.955 "digest": "sha256", 00:18:29.955 "dhgroup": "ffdhe8192" 00:18:29.955 } 00:18:29.955 } 00:18:29.955 ]' 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.955 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.955 11:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.215 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.215 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.215 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.215 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.215 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.215 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:30.215 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:31.156 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.156 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.156 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:31.156 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.156 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.156 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:31.157 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.157 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.157 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.157 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.417 
11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.417 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.417 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.677 { 00:18:31.677 "cntlid": 49, 00:18:31.677 "qid": 0, 00:18:31.677 "state": "enabled", 00:18:31.677 "thread": "nvmf_tgt_poll_group_000", 00:18:31.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:31.677 "listen_address": { 00:18:31.677 "trtype": "TCP", 00:18:31.677 "adrfam": "IPv4", 00:18:31.677 "traddr": "10.0.0.2", 00:18:31.677 "trsvcid": "4420" 00:18:31.677 }, 00:18:31.677 "peer_address": { 00:18:31.677 "trtype": "TCP", 00:18:31.677 "adrfam": "IPv4", 00:18:31.677 "traddr": "10.0.0.1", 00:18:31.677 "trsvcid": "47692" 00:18:31.677 }, 00:18:31.677 "auth": { 00:18:31.677 "state": "completed", 00:18:31.677 "digest": "sha384", 00:18:31.677 "dhgroup": "null" 00:18:31.677 } 00:18:31.677 } 00:18:31.677 ]' 00:18:31.677 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.938 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.938 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.938 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:31.938 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.938 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.938 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:31.938 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.197 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:32.198 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:32.767 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.767 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.767 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.767 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.767 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.767 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.767 11:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:32.768 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.028 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.029 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.029 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.029 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.029 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.289 00:18:33.289 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.289 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.289 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.550 { 00:18:33.550 "cntlid": 51, 00:18:33.550 "qid": 0, 00:18:33.550 "state": "enabled", 00:18:33.550 "thread": "nvmf_tgt_poll_group_000", 00:18:33.550 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:33.550 "listen_address": { 00:18:33.550 "trtype": "TCP", 00:18:33.550 "adrfam": "IPv4", 00:18:33.550 "traddr": "10.0.0.2", 00:18:33.550 "trsvcid": "4420" 00:18:33.550 }, 00:18:33.550 "peer_address": { 00:18:33.550 "trtype": "TCP", 00:18:33.550 "adrfam": "IPv4", 00:18:33.550 "traddr": "10.0.0.1", 00:18:33.550 "trsvcid": "47710" 00:18:33.550 }, 00:18:33.550 "auth": { 00:18:33.550 "state": "completed", 00:18:33.550 "digest": "sha384", 00:18:33.550 "dhgroup": "null" 00:18:33.550 } 00:18:33.550 } 00:18:33.550 ]' 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.550 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.810 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:33.810 11:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.751 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.036 00:18:35.036 11:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.036 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.036 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.315 { 00:18:35.315 "cntlid": 53, 00:18:35.315 "qid": 0, 00:18:35.315 "state": "enabled", 00:18:35.315 "thread": "nvmf_tgt_poll_group_000", 00:18:35.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:35.315 "listen_address": { 00:18:35.315 "trtype": "TCP", 00:18:35.315 "adrfam": "IPv4", 00:18:35.315 "traddr": "10.0.0.2", 00:18:35.315 "trsvcid": "4420" 00:18:35.315 }, 00:18:35.315 "peer_address": { 00:18:35.315 "trtype": "TCP", 00:18:35.315 "adrfam": "IPv4", 00:18:35.315 "traddr": "10.0.0.1", 00:18:35.315 "trsvcid": "53622" 00:18:35.315 }, 00:18:35.315 "auth": { 00:18:35.315 "state": "completed", 00:18:35.315 "digest": "sha384", 00:18:35.315 "dhgroup": "null" 00:18:35.315 } 00:18:35.315 } 00:18:35.315 ]' 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.315 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.584 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:35.584 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:36.165 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.165 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.165 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.165 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.165 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.165 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.165 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:36.165 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:36.425 
11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.425 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.686 00:18:36.686 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.686 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.686 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.947 11:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.947 { 00:18:36.947 "cntlid": 55, 00:18:36.947 "qid": 0, 00:18:36.947 "state": "enabled", 00:18:36.947 "thread": "nvmf_tgt_poll_group_000", 00:18:36.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:36.947 "listen_address": { 00:18:36.947 "trtype": "TCP", 00:18:36.947 "adrfam": "IPv4", 00:18:36.947 "traddr": "10.0.0.2", 00:18:36.947 "trsvcid": "4420" 00:18:36.947 }, 00:18:36.947 "peer_address": { 00:18:36.947 "trtype": "TCP", 00:18:36.947 "adrfam": "IPv4", 00:18:36.947 "traddr": "10.0.0.1", 00:18:36.947 "trsvcid": "53648" 00:18:36.947 }, 00:18:36.947 "auth": { 00:18:36.947 "state": "completed", 00:18:36.947 "digest": "sha384", 00:18:36.947 "dhgroup": "null" 00:18:36.947 } 00:18:36.947 } 00:18:36.947 ]' 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.947 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.208 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:37.208 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:37.778 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.039 11:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.039 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.299 00:18:38.299 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.299 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.299 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.559 { 00:18:38.559 "cntlid": 57, 00:18:38.559 "qid": 0, 00:18:38.559 "state": "enabled", 00:18:38.559 "thread": "nvmf_tgt_poll_group_000", 00:18:38.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:38.559 "listen_address": { 00:18:38.559 "trtype": "TCP", 00:18:38.559 "adrfam": "IPv4", 00:18:38.559 "traddr": "10.0.0.2", 00:18:38.559 
"trsvcid": "4420" 00:18:38.559 }, 00:18:38.559 "peer_address": { 00:18:38.559 "trtype": "TCP", 00:18:38.559 "adrfam": "IPv4", 00:18:38.559 "traddr": "10.0.0.1", 00:18:38.559 "trsvcid": "53682" 00:18:38.559 }, 00:18:38.559 "auth": { 00:18:38.559 "state": "completed", 00:18:38.559 "digest": "sha384", 00:18:38.559 "dhgroup": "ffdhe2048" 00:18:38.559 } 00:18:38.559 } 00:18:38.559 ]' 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.559 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.819 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:38.819 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:39.759 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.759 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:39.759 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.760 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.760 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.760 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.760 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.760 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.760 11:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.760 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.020 00:18:40.020 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.020 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.020 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.280 { 00:18:40.280 "cntlid": 59, 00:18:40.280 "qid": 0, 00:18:40.280 "state": "enabled", 00:18:40.280 "thread": "nvmf_tgt_poll_group_000", 00:18:40.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:40.280 "listen_address": { 00:18:40.280 "trtype": "TCP", 00:18:40.280 "adrfam": "IPv4", 00:18:40.280 "traddr": "10.0.0.2", 00:18:40.280 "trsvcid": "4420" 00:18:40.280 }, 00:18:40.280 "peer_address": { 00:18:40.280 "trtype": "TCP", 00:18:40.280 "adrfam": "IPv4", 00:18:40.280 "traddr": "10.0.0.1", 00:18:40.280 "trsvcid": "53710" 00:18:40.280 }, 00:18:40.280 "auth": { 00:18:40.280 "state": "completed", 00:18:40.280 "digest": "sha384", 00:18:40.280 "dhgroup": "ffdhe2048" 00:18:40.280 } 00:18:40.280 } 00:18:40.280 ]' 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.280 11:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.280 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.540 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:40.540 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.478 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.738 00:18:41.738 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.738 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.738 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.998 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.998 { 00:18:41.998 "cntlid": 61, 00:18:41.998 "qid": 0, 00:18:41.998 "state": "enabled", 00:18:41.998 "thread": "nvmf_tgt_poll_group_000", 00:18:41.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:41.998 "listen_address": { 00:18:41.998 "trtype": "TCP", 00:18:41.998 "adrfam": "IPv4", 00:18:41.998 "traddr": "10.0.0.2", 00:18:41.998 "trsvcid": "4420" 00:18:41.998 }, 00:18:41.998 "peer_address": { 00:18:41.998 "trtype": "TCP", 00:18:41.998 "adrfam": "IPv4", 00:18:41.998 "traddr": "10.0.0.1", 00:18:41.998 "trsvcid": "53748" 00:18:41.998 }, 00:18:41.998 "auth": { 00:18:41.998 "state": "completed", 00:18:41.998 "digest": "sha384", 00:18:41.998 "dhgroup": "ffdhe2048" 00:18:41.998 } 00:18:41.998 } 00:18:41.998 ]' 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.998 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.258 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:42.258 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.198 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.458 00:18:43.458 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.458 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.459 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.720 { 00:18:43.720 "cntlid": 63, 00:18:43.720 "qid": 0, 00:18:43.720 "state": "enabled", 00:18:43.720 "thread": "nvmf_tgt_poll_group_000", 00:18:43.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:43.720 "listen_address": { 00:18:43.720 "trtype": "TCP", 00:18:43.720 "adrfam": 
"IPv4", 00:18:43.720 "traddr": "10.0.0.2", 00:18:43.720 "trsvcid": "4420" 00:18:43.720 }, 00:18:43.720 "peer_address": { 00:18:43.720 "trtype": "TCP", 00:18:43.720 "adrfam": "IPv4", 00:18:43.720 "traddr": "10.0.0.1", 00:18:43.720 "trsvcid": "53768" 00:18:43.720 }, 00:18:43.720 "auth": { 00:18:43.720 "state": "completed", 00:18:43.720 "digest": "sha384", 00:18:43.720 "dhgroup": "ffdhe2048" 00:18:43.720 } 00:18:43.720 } 00:18:43.720 ]' 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.720 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.721 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.721 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.721 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.982 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:43.982 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.925 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:44.925 
11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.925 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.186 00:18:45.186 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.186 11:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.186 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.447 { 00:18:45.447 "cntlid": 65, 00:18:45.447 "qid": 0, 00:18:45.447 "state": "enabled", 00:18:45.447 "thread": "nvmf_tgt_poll_group_000", 00:18:45.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:45.447 "listen_address": { 00:18:45.447 "trtype": "TCP", 00:18:45.447 "adrfam": "IPv4", 00:18:45.447 "traddr": "10.0.0.2", 00:18:45.447 "trsvcid": "4420" 00:18:45.447 }, 00:18:45.447 "peer_address": { 00:18:45.447 "trtype": "TCP", 00:18:45.447 "adrfam": "IPv4", 00:18:45.447 "traddr": "10.0.0.1", 00:18:45.447 "trsvcid": "33906" 00:18:45.447 }, 00:18:45.447 "auth": { 00:18:45.447 "state": "completed", 00:18:45.447 "digest": "sha384", 00:18:45.447 "dhgroup": "ffdhe3072" 00:18:45.447 } 00:18:45.447 } 00:18:45.447 ]' 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.447 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.708 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:45.708 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.648 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.649 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.908 00:18:46.908 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.908 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.908 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.168 11:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.168 { 00:18:47.168 "cntlid": 67, 00:18:47.168 "qid": 0, 00:18:47.168 "state": "enabled", 00:18:47.168 "thread": "nvmf_tgt_poll_group_000", 00:18:47.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:47.168 "listen_address": { 00:18:47.168 "trtype": "TCP", 00:18:47.168 "adrfam": "IPv4", 00:18:47.168 "traddr": "10.0.0.2", 00:18:47.168 "trsvcid": "4420" 00:18:47.168 }, 00:18:47.168 "peer_address": { 00:18:47.168 "trtype": "TCP", 00:18:47.168 "adrfam": "IPv4", 00:18:47.168 "traddr": "10.0.0.1", 00:18:47.168 "trsvcid": "33934" 00:18:47.168 }, 00:18:47.168 "auth": { 00:18:47.168 "state": "completed", 00:18:47.168 "digest": "sha384", 00:18:47.168 "dhgroup": "ffdhe3072" 00:18:47.168 } 00:18:47.168 } 00:18:47.168 ]' 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.168 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.427 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:47.427 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.367 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.627 00:18:48.627 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.628 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.628 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.888 { 00:18:48.888 "cntlid": 69, 00:18:48.888 "qid": 0, 00:18:48.888 "state": "enabled", 00:18:48.888 "thread": "nvmf_tgt_poll_group_000", 00:18:48.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:48.888 
"listen_address": { 00:18:48.888 "trtype": "TCP", 00:18:48.888 "adrfam": "IPv4", 00:18:48.888 "traddr": "10.0.0.2", 00:18:48.888 "trsvcid": "4420" 00:18:48.888 }, 00:18:48.888 "peer_address": { 00:18:48.888 "trtype": "TCP", 00:18:48.888 "adrfam": "IPv4", 00:18:48.888 "traddr": "10.0.0.1", 00:18:48.888 "trsvcid": "33962" 00:18:48.888 }, 00:18:48.888 "auth": { 00:18:48.888 "state": "completed", 00:18:48.888 "digest": "sha384", 00:18:48.888 "dhgroup": "ffdhe3072" 00:18:48.888 } 00:18:48.888 } 00:18:48.888 ]' 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.888 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.149 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:49.149 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.091 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.351 00:18:50.351 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.351 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:50.351 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.611 { 00:18:50.611 "cntlid": 71, 00:18:50.611 "qid": 0, 00:18:50.611 "state": "enabled", 00:18:50.611 "thread": "nvmf_tgt_poll_group_000", 00:18:50.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:50.611 "listen_address": { 00:18:50.611 "trtype": "TCP", 00:18:50.611 "adrfam": "IPv4", 00:18:50.611 "traddr": "10.0.0.2", 00:18:50.611 "trsvcid": "4420" 00:18:50.611 }, 00:18:50.611 "peer_address": { 00:18:50.611 "trtype": "TCP", 00:18:50.611 "adrfam": "IPv4", 00:18:50.611 "traddr": "10.0.0.1", 00:18:50.611 "trsvcid": "33980" 00:18:50.611 }, 00:18:50.611 "auth": { 00:18:50.611 "state": "completed", 00:18:50.611 "digest": "sha384", 00:18:50.611 "dhgroup": "ffdhe3072" 00:18:50.611 } 00:18:50.611 } 00:18:50.611 ]' 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.611 11:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.611 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.612 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.612 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.612 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.612 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.872 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:50.872 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:51.441 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.702 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.702 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:51.702 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.702 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.702 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.702 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.702 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.702 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.702 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.962 00:18:51.962 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.962 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.962 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.224 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.224 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.224 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.224 11:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.224 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.224 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.224 { 00:18:52.224 "cntlid": 73, 00:18:52.224 "qid": 0, 00:18:52.224 "state": "enabled", 00:18:52.224 "thread": "nvmf_tgt_poll_group_000", 00:18:52.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:52.224 "listen_address": { 00:18:52.224 "trtype": "TCP", 00:18:52.224 "adrfam": "IPv4", 00:18:52.224 "traddr": "10.0.0.2", 00:18:52.224 "trsvcid": "4420" 00:18:52.224 }, 00:18:52.224 "peer_address": { 00:18:52.224 "trtype": "TCP", 00:18:52.224 "adrfam": "IPv4", 00:18:52.224 "traddr": "10.0.0.1", 00:18:52.224 "trsvcid": "34018" 00:18:52.224 }, 00:18:52.224 "auth": { 00:18:52.224 "state": "completed", 00:18:52.224 "digest": "sha384", 00:18:52.224 "dhgroup": "ffdhe4096" 00:18:52.224 } 00:18:52.224 } 00:18:52.224 ]' 00:18:52.224 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.224 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.224 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.486 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.486 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.486 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.486 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.486 11:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.486 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:52.486 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.426 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.427 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.687 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.947 { 00:18:53.947 "cntlid": 75, 00:18:53.947 "qid": 0, 00:18:53.947 "state": "enabled", 00:18:53.947 "thread": "nvmf_tgt_poll_group_000", 00:18:53.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:53.947 
"listen_address": { 00:18:53.947 "trtype": "TCP", 00:18:53.947 "adrfam": "IPv4", 00:18:53.947 "traddr": "10.0.0.2", 00:18:53.947 "trsvcid": "4420" 00:18:53.947 }, 00:18:53.947 "peer_address": { 00:18:53.947 "trtype": "TCP", 00:18:53.947 "adrfam": "IPv4", 00:18:53.947 "traddr": "10.0.0.1", 00:18:53.947 "trsvcid": "45822" 00:18:53.947 }, 00:18:53.947 "auth": { 00:18:53.947 "state": "completed", 00:18:53.947 "digest": "sha384", 00:18:53.947 "dhgroup": "ffdhe4096" 00:18:53.947 } 00:18:53.947 } 00:18:53.947 ]' 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.947 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.207 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.207 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.207 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.207 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.207 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.468 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:54.468 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:18:55.038 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.038 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.038 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.038 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.038 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.038 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.038 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:55.038 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.298 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.558 00:18:55.558 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:55.558 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.558 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.818 { 00:18:55.818 "cntlid": 77, 00:18:55.818 "qid": 0, 00:18:55.818 "state": "enabled", 00:18:55.818 "thread": "nvmf_tgt_poll_group_000", 00:18:55.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:55.818 "listen_address": { 00:18:55.818 "trtype": "TCP", 00:18:55.818 "adrfam": "IPv4", 00:18:55.818 "traddr": "10.0.0.2", 00:18:55.818 "trsvcid": "4420" 00:18:55.818 }, 00:18:55.818 "peer_address": { 00:18:55.818 "trtype": "TCP", 00:18:55.818 "adrfam": "IPv4", 00:18:55.818 "traddr": "10.0.0.1", 00:18:55.818 "trsvcid": "45852" 00:18:55.818 }, 00:18:55.818 "auth": { 00:18:55.818 "state": "completed", 00:18:55.818 "digest": "sha384", 00:18:55.818 "dhgroup": "ffdhe4096" 00:18:55.818 } 00:18:55.818 } 00:18:55.818 ]' 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.818 11:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.818 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.078 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.078 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.078 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.078 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:56.078 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:57.018 11:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.018 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.278 00:18:57.278 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.278 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.278 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.539 11:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.539 { 00:18:57.539 "cntlid": 79, 00:18:57.539 "qid": 0, 00:18:57.539 "state": "enabled", 00:18:57.539 "thread": "nvmf_tgt_poll_group_000", 00:18:57.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:57.539 "listen_address": { 00:18:57.539 "trtype": "TCP", 00:18:57.539 "adrfam": "IPv4", 00:18:57.539 "traddr": "10.0.0.2", 00:18:57.539 "trsvcid": "4420" 00:18:57.539 }, 00:18:57.539 "peer_address": { 00:18:57.539 "trtype": "TCP", 00:18:57.539 "adrfam": "IPv4", 00:18:57.539 "traddr": "10.0.0.1", 00:18:57.539 "trsvcid": "45876" 00:18:57.539 }, 00:18:57.539 "auth": { 00:18:57.539 "state": "completed", 00:18:57.539 "digest": "sha384", 00:18:57.539 "dhgroup": "ffdhe4096" 00:18:57.539 } 00:18:57.539 } 00:18:57.539 ]' 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.539 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.800 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.800 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.800 11:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.800 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:57.800 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:18:58.371 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.632 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.203 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.203 { 00:18:59.203 "cntlid": 81, 00:18:59.203 "qid": 0, 00:18:59.203 "state": "enabled", 00:18:59.203 "thread": "nvmf_tgt_poll_group_000", 00:18:59.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:59.203 "listen_address": { 
00:18:59.203 "trtype": "TCP", 00:18:59.203 "adrfam": "IPv4", 00:18:59.203 "traddr": "10.0.0.2", 00:18:59.203 "trsvcid": "4420" 00:18:59.203 }, 00:18:59.203 "peer_address": { 00:18:59.203 "trtype": "TCP", 00:18:59.203 "adrfam": "IPv4", 00:18:59.203 "traddr": "10.0.0.1", 00:18:59.203 "trsvcid": "45906" 00:18:59.203 }, 00:18:59.203 "auth": { 00:18:59.203 "state": "completed", 00:18:59.203 "digest": "sha384", 00:18:59.203 "dhgroup": "ffdhe6144" 00:18:59.203 } 00:18:59.203 } 00:18:59.203 ]' 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.203 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.464 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.465 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.465 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.465 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.465 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.465 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:18:59.465 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:00.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.667 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.936 00:19:00.936 11:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.936 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.936 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.199 { 00:19:01.199 "cntlid": 83, 00:19:01.199 "qid": 0, 00:19:01.199 "state": "enabled", 00:19:01.199 "thread": "nvmf_tgt_poll_group_000", 00:19:01.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:01.199 "listen_address": { 00:19:01.199 "trtype": "TCP", 00:19:01.199 "adrfam": "IPv4", 00:19:01.199 "traddr": "10.0.0.2", 00:19:01.199 "trsvcid": "4420" 00:19:01.199 }, 00:19:01.199 "peer_address": { 00:19:01.199 "trtype": "TCP", 00:19:01.199 "adrfam": "IPv4", 00:19:01.199 "traddr": "10.0.0.1", 00:19:01.199 "trsvcid": "45942" 00:19:01.199 }, 00:19:01.199 "auth": { 00:19:01.199 "state": "completed", 00:19:01.199 "digest": "sha384", 00:19:01.199 "dhgroup": "ffdhe6144" 00:19:01.199 } 00:19:01.199 } 00:19:01.199 ]' 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.199 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.459 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:01.459 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.400 11:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.400 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.660 00:19:02.660 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.660 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.660 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.920 { 00:19:02.920 "cntlid": 85, 00:19:02.920 "qid": 0, 00:19:02.920 "state": "enabled", 00:19:02.920 "thread": "nvmf_tgt_poll_group_000", 00:19:02.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:02.920 "listen_address": { 00:19:02.920 "trtype": "TCP", 00:19:02.920 "adrfam": "IPv4", 00:19:02.920 "traddr": "10.0.0.2", 00:19:02.920 "trsvcid": "4420" 00:19:02.920 }, 00:19:02.920 "peer_address": { 00:19:02.920 "trtype": "TCP", 00:19:02.920 "adrfam": "IPv4", 00:19:02.920 "traddr": "10.0.0.1", 00:19:02.920 "trsvcid": "45970" 00:19:02.920 }, 00:19:02.920 "auth": { 00:19:02.920 "state": "completed", 00:19:02.920 "digest": "sha384", 00:19:02.920 "dhgroup": "ffdhe6144" 00:19:02.920 } 00:19:02.920 } 00:19:02.920 ]' 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.920 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.181 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:03.181 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.181 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.181 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:03.181 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.124 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.695 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.695 { 00:19:04.695 "cntlid": 87, 00:19:04.695 "qid": 0, 00:19:04.695 "state": "enabled", 00:19:04.695 "thread": "nvmf_tgt_poll_group_000", 00:19:04.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:04.695 "listen_address": { 00:19:04.695 "trtype": 
"TCP", 00:19:04.695 "adrfam": "IPv4", 00:19:04.695 "traddr": "10.0.0.2", 00:19:04.695 "trsvcid": "4420" 00:19:04.695 }, 00:19:04.695 "peer_address": { 00:19:04.695 "trtype": "TCP", 00:19:04.695 "adrfam": "IPv4", 00:19:04.695 "traddr": "10.0.0.1", 00:19:04.695 "trsvcid": "51138" 00:19:04.695 }, 00:19:04.695 "auth": { 00:19:04.695 "state": "completed", 00:19:04.695 "digest": "sha384", 00:19:04.695 "dhgroup": "ffdhe6144" 00:19:04.695 } 00:19:04.695 } 00:19:04.695 ]' 00:19:04.695 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.695 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.695 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.956 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.956 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.956 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.956 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.956 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.956 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:04.956 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:05.908 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.908 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.908 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.908 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.908 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.908 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.909 11:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.909 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.484 00:19:06.484 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.484 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.484 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.745 { 00:19:06.745 "cntlid": 89, 00:19:06.745 "qid": 0, 00:19:06.745 "state": "enabled", 00:19:06.745 "thread": "nvmf_tgt_poll_group_000", 00:19:06.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:06.745 "listen_address": { 00:19:06.745 "trtype": "TCP", 00:19:06.745 "adrfam": "IPv4", 00:19:06.745 "traddr": "10.0.0.2", 00:19:06.745 "trsvcid": "4420" 00:19:06.745 }, 00:19:06.745 "peer_address": { 00:19:06.745 "trtype": "TCP", 00:19:06.745 "adrfam": "IPv4", 00:19:06.745 "traddr": "10.0.0.1", 00:19:06.745 "trsvcid": "51172" 00:19:06.745 }, 00:19:06.745 "auth": { 00:19:06.745 "state": "completed", 00:19:06.745 "digest": "sha384", 00:19:06.745 "dhgroup": "ffdhe8192" 00:19:06.745 } 00:19:06.745 } 00:19:06.745 ]' 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.745 11:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.745 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.745 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.745 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.745 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.006 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:07.006 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:07.949 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:07.949 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.949 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.949 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.949 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.949 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.949 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:07.949 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.949 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.521 00:19:08.521 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.521 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.522 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.522 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.522 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.522 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.522 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.522 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.522 { 00:19:08.522 "cntlid": 91, 00:19:08.522 "qid": 0, 00:19:08.522 "state": "enabled", 00:19:08.522 "thread": "nvmf_tgt_poll_group_000", 00:19:08.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:08.522 "listen_address": { 00:19:08.522 "trtype": "TCP", 00:19:08.522 "adrfam": "IPv4", 00:19:08.522 "traddr": "10.0.0.2", 00:19:08.522 "trsvcid": "4420" 00:19:08.522 }, 00:19:08.522 "peer_address": { 00:19:08.522 "trtype": "TCP", 00:19:08.522 "adrfam": "IPv4", 00:19:08.522 "traddr": "10.0.0.1", 00:19:08.522 "trsvcid": "51200" 00:19:08.522 }, 00:19:08.522 "auth": { 00:19:08.522 "state": "completed", 00:19:08.522 "digest": "sha384", 00:19:08.522 "dhgroup": "ffdhe8192" 00:19:08.522 } 00:19:08.522 } 00:19:08.522 ]' 00:19:08.522 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.782 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.782 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.782 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.782 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.782 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:08.782 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.782 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.043 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:09.043 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:09.613 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.613 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.613 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.613 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.613 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.613 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:09.613 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:09.613 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.873 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.445 00:19:10.445 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.445 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.445 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.707 { 00:19:10.707 "cntlid": 93, 00:19:10.707 "qid": 0, 00:19:10.707 "state": "enabled", 00:19:10.707 "thread": "nvmf_tgt_poll_group_000", 00:19:10.707 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:10.707 "listen_address": { 00:19:10.707 "trtype": "TCP", 00:19:10.707 "adrfam": "IPv4", 00:19:10.707 "traddr": "10.0.0.2", 00:19:10.707 "trsvcid": "4420" 00:19:10.707 }, 00:19:10.707 "peer_address": { 00:19:10.707 "trtype": "TCP", 00:19:10.707 "adrfam": "IPv4", 00:19:10.707 "traddr": "10.0.0.1", 00:19:10.707 "trsvcid": "51216" 00:19:10.707 }, 00:19:10.707 "auth": { 00:19:10.707 "state": "completed", 00:19:10.707 "digest": "sha384", 00:19:10.707 "dhgroup": "ffdhe8192" 00:19:10.707 } 00:19:10.707 } 00:19:10.707 ]' 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.707 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.707 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.707 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.707 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.968 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:10.968 11:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:11.908 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.908 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.908 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.908 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.908 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.908 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.908 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.908 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.908 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.478 00:19:12.478 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:12.478 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.478 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.738 { 00:19:12.738 "cntlid": 95, 00:19:12.738 "qid": 0, 00:19:12.738 "state": "enabled", 00:19:12.738 "thread": "nvmf_tgt_poll_group_000", 00:19:12.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:12.738 "listen_address": { 00:19:12.738 "trtype": "TCP", 00:19:12.738 "adrfam": "IPv4", 00:19:12.738 "traddr": "10.0.0.2", 00:19:12.738 "trsvcid": "4420" 00:19:12.738 }, 00:19:12.738 "peer_address": { 00:19:12.738 "trtype": "TCP", 00:19:12.738 "adrfam": "IPv4", 00:19:12.738 "traddr": "10.0.0.1", 00:19:12.738 "trsvcid": "51234" 00:19:12.738 }, 00:19:12.738 "auth": { 00:19:12.738 "state": "completed", 00:19:12.738 "digest": "sha384", 00:19:12.738 "dhgroup": "ffdhe8192" 00:19:12.738 } 00:19:12.738 } 00:19:12.738 ]' 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.738 11:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.738 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.738 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.738 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.738 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.997 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:12.997 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:13.567 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.568 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.828 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.088 00:19:14.088 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.088 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.088 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.349 11:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.349 { 00:19:14.349 "cntlid": 97, 00:19:14.349 "qid": 0, 00:19:14.349 "state": "enabled", 00:19:14.349 "thread": "nvmf_tgt_poll_group_000", 00:19:14.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:14.349 "listen_address": { 00:19:14.349 "trtype": "TCP", 00:19:14.349 "adrfam": "IPv4", 00:19:14.349 "traddr": "10.0.0.2", 00:19:14.349 "trsvcid": "4420" 00:19:14.349 }, 00:19:14.349 "peer_address": { 00:19:14.349 "trtype": "TCP", 00:19:14.349 "adrfam": "IPv4", 00:19:14.349 "traddr": "10.0.0.1", 00:19:14.349 "trsvcid": "48476" 00:19:14.349 }, 00:19:14.349 "auth": { 00:19:14.349 "state": "completed", 00:19:14.349 "digest": "sha512", 00:19:14.349 "dhgroup": "null" 00:19:14.349 } 00:19:14.349 } 00:19:14.349 ]' 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.349 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.617 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:14.617 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.384 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.645 00:19:15.645 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.645 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.645 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.906 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.906 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.906 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.906 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.906 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.906 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.906 { 00:19:15.906 "cntlid": 99, 
00:19:15.906 "qid": 0, 00:19:15.906 "state": "enabled", 00:19:15.906 "thread": "nvmf_tgt_poll_group_000", 00:19:15.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:15.906 "listen_address": { 00:19:15.906 "trtype": "TCP", 00:19:15.906 "adrfam": "IPv4", 00:19:15.906 "traddr": "10.0.0.2", 00:19:15.906 "trsvcid": "4420" 00:19:15.906 }, 00:19:15.906 "peer_address": { 00:19:15.906 "trtype": "TCP", 00:19:15.906 "adrfam": "IPv4", 00:19:15.906 "traddr": "10.0.0.1", 00:19:15.906 "trsvcid": "48520" 00:19:15.906 }, 00:19:15.906 "auth": { 00:19:15.907 "state": "completed", 00:19:15.907 "digest": "sha512", 00:19:15.907 "dhgroup": "null" 00:19:15.907 } 00:19:15.907 } 00:19:15.907 ]' 00:19:15.907 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.907 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.907 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.907 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:15.907 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.167 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.167 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.167 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.167 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret 
DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:16.167 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:16.738 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.998 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.999 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.259 00:19:17.259 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.259 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.259 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.519 { 00:19:17.519 "cntlid": 101, 00:19:17.519 "qid": 0, 00:19:17.519 "state": "enabled", 00:19:17.519 "thread": "nvmf_tgt_poll_group_000", 00:19:17.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:17.519 "listen_address": { 00:19:17.519 "trtype": "TCP", 00:19:17.519 "adrfam": "IPv4", 00:19:17.519 "traddr": "10.0.0.2", 00:19:17.519 "trsvcid": "4420" 00:19:17.519 }, 00:19:17.519 "peer_address": { 00:19:17.519 "trtype": "TCP", 00:19:17.519 "adrfam": "IPv4", 00:19:17.519 "traddr": "10.0.0.1", 00:19:17.519 "trsvcid": "48548" 00:19:17.519 }, 00:19:17.519 "auth": { 00:19:17.519 "state": "completed", 00:19:17.519 "digest": "sha512", 00:19:17.519 "dhgroup": "null" 00:19:17.519 } 00:19:17.519 } 
00:19:17.519 ]' 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.519 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.779 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:17.779 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:18.721 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.721 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.721 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.721 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.721 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.721 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.721 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.721 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:18.721 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.721 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:18.722 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.722 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.982 00:19:18.982 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.982 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.982 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.243 { 00:19:19.243 "cntlid": 103, 00:19:19.243 "qid": 0, 00:19:19.243 "state": "enabled", 00:19:19.243 "thread": "nvmf_tgt_poll_group_000", 00:19:19.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:19.243 "listen_address": { 00:19:19.243 "trtype": "TCP", 00:19:19.243 "adrfam": "IPv4", 00:19:19.243 "traddr": "10.0.0.2", 00:19:19.243 "trsvcid": "4420" 00:19:19.243 }, 00:19:19.243 "peer_address": { 00:19:19.243 "trtype": "TCP", 00:19:19.243 "adrfam": "IPv4", 00:19:19.243 "traddr": "10.0.0.1", 00:19:19.243 "trsvcid": "48572" 00:19:19.243 }, 00:19:19.243 "auth": { 00:19:19.243 "state": "completed", 00:19:19.243 "digest": "sha512", 00:19:19.243 "dhgroup": "null" 00:19:19.243 } 00:19:19.243 } 00:19:19.243 ]' 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.243 11:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.243 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.503 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:19.503 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.446 11:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.446 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.707 00:19:20.707 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.707 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.707 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.980 { 00:19:20.980 "cntlid": 105, 00:19:20.980 "qid": 0, 00:19:20.980 "state": "enabled", 00:19:20.980 "thread": "nvmf_tgt_poll_group_000", 00:19:20.980 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:20.980 "listen_address": { 00:19:20.980 "trtype": "TCP", 00:19:20.980 "adrfam": "IPv4", 00:19:20.980 "traddr": "10.0.0.2", 00:19:20.980 "trsvcid": "4420" 00:19:20.980 }, 00:19:20.980 "peer_address": { 00:19:20.980 "trtype": "TCP", 00:19:20.980 "adrfam": "IPv4", 00:19:20.980 "traddr": "10.0.0.1", 00:19:20.980 "trsvcid": "48600" 00:19:20.980 }, 00:19:20.980 "auth": { 00:19:20.980 "state": "completed", 00:19:20.980 "digest": "sha512", 00:19:20.980 "dhgroup": "ffdhe2048" 00:19:20.980 } 00:19:20.980 } 00:19:20.980 ]' 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.980 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.241 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret 
DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:21.241 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:21.813 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.813 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.813 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.813 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:22.073 11:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.073 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.334 00:19:22.334 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.334 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.334 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.594 { 00:19:22.594 "cntlid": 107, 00:19:22.594 "qid": 0, 00:19:22.594 "state": "enabled", 00:19:22.594 "thread": "nvmf_tgt_poll_group_000", 00:19:22.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:22.594 "listen_address": { 00:19:22.594 "trtype": "TCP", 00:19:22.594 "adrfam": "IPv4", 00:19:22.594 "traddr": "10.0.0.2", 00:19:22.594 "trsvcid": "4420" 00:19:22.594 }, 00:19:22.594 "peer_address": { 00:19:22.594 "trtype": "TCP", 00:19:22.594 "adrfam": "IPv4", 00:19:22.594 "traddr": "10.0.0.1", 00:19:22.594 "trsvcid": "48630" 00:19:22.594 }, 00:19:22.594 "auth": { 00:19:22.594 "state": 
"completed", 00:19:22.594 "digest": "sha512", 00:19:22.594 "dhgroup": "ffdhe2048" 00:19:22.594 } 00:19:22.594 } 00:19:22.594 ]' 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.594 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.855 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:22.855 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:23.796 11:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.796 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:23.796 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.796 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.796 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.796 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.796 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:23.796 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.796 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.056 00:19:24.056 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.056 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.057 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.316 
11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.316 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.316 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.316 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.316 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.316 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.316 { 00:19:24.316 "cntlid": 109, 00:19:24.316 "qid": 0, 00:19:24.316 "state": "enabled", 00:19:24.316 "thread": "nvmf_tgt_poll_group_000", 00:19:24.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:24.316 "listen_address": { 00:19:24.316 "trtype": "TCP", 00:19:24.316 "adrfam": "IPv4", 00:19:24.316 "traddr": "10.0.0.2", 00:19:24.316 "trsvcid": "4420" 00:19:24.316 }, 00:19:24.316 "peer_address": { 00:19:24.316 "trtype": "TCP", 00:19:24.316 "adrfam": "IPv4", 00:19:24.316 "traddr": "10.0.0.1", 00:19:24.316 "trsvcid": "58478" 00:19:24.316 }, 00:19:24.316 "auth": { 00:19:24.316 "state": "completed", 00:19:24.316 "digest": "sha512", 00:19:24.316 "dhgroup": "ffdhe2048" 00:19:24.316 } 00:19:24.316 } 00:19:24.316 ]' 00:19:24.316 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.316 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.316 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.317 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.317 11:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.317 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.317 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.317 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.577 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:24.577 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.518 
11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:25.518 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.519 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:25.519 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.519 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.519 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.519 11:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:25.519 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.519 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:25.778 00:19:25.778 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.778 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.778 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.038 { 00:19:26.038 "cntlid": 111, 
00:19:26.038 "qid": 0, 00:19:26.038 "state": "enabled", 00:19:26.038 "thread": "nvmf_tgt_poll_group_000", 00:19:26.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:26.038 "listen_address": { 00:19:26.038 "trtype": "TCP", 00:19:26.038 "adrfam": "IPv4", 00:19:26.038 "traddr": "10.0.0.2", 00:19:26.038 "trsvcid": "4420" 00:19:26.038 }, 00:19:26.038 "peer_address": { 00:19:26.038 "trtype": "TCP", 00:19:26.038 "adrfam": "IPv4", 00:19:26.038 "traddr": "10.0.0.1", 00:19:26.038 "trsvcid": "58508" 00:19:26.038 }, 00:19:26.038 "auth": { 00:19:26.038 "state": "completed", 00:19:26.038 "digest": "sha512", 00:19:26.038 "dhgroup": "ffdhe2048" 00:19:26.038 } 00:19:26.038 } 00:19:26.038 ]' 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.038 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.298 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:26.298 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.239 11:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.239 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.499 00:19:27.499 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.499 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.500 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.760 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.760 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.760 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.760 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.760 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.760 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.760 { 00:19:27.760 "cntlid": 113, 00:19:27.760 "qid": 0, 00:19:27.760 "state": "enabled", 00:19:27.760 "thread": "nvmf_tgt_poll_group_000", 00:19:27.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:27.760 "listen_address": { 00:19:27.760 "trtype": "TCP", 00:19:27.760 "adrfam": "IPv4", 00:19:27.760 "traddr": "10.0.0.2", 00:19:27.760 "trsvcid": "4420" 00:19:27.760 }, 00:19:27.760 "peer_address": { 00:19:27.760 "trtype": "TCP", 00:19:27.760 "adrfam": "IPv4", 00:19:27.760 "traddr": "10.0.0.1", 00:19:27.760 "trsvcid": "58530" 00:19:27.760 }, 00:19:27.760 "auth": { 00:19:27.760 "state": 
"completed", 00:19:27.760 "digest": "sha512", 00:19:27.760 "dhgroup": "ffdhe3072" 00:19:27.760 } 00:19:27.760 } 00:19:27.760 ]' 00:19:27.760 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.760 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.760 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.760 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.760 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.760 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.760 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.760 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.020 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:28.020 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret 
DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.961 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.221 00:19:29.221 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.221 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.221 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.482 { 00:19:29.482 "cntlid": 115, 00:19:29.482 "qid": 0, 00:19:29.482 "state": "enabled", 00:19:29.482 "thread": "nvmf_tgt_poll_group_000", 00:19:29.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:29.482 "listen_address": { 00:19:29.482 "trtype": "TCP", 00:19:29.482 "adrfam": "IPv4", 00:19:29.482 "traddr": "10.0.0.2", 00:19:29.482 "trsvcid": "4420" 00:19:29.482 }, 00:19:29.482 "peer_address": { 00:19:29.482 "trtype": "TCP", 00:19:29.482 "adrfam": "IPv4", 00:19:29.482 "traddr": "10.0.0.1", 00:19:29.482 "trsvcid": "58556" 00:19:29.482 }, 00:19:29.482 "auth": { 00:19:29.482 "state": "completed", 00:19:29.482 "digest": "sha512", 00:19:29.482 "dhgroup": "ffdhe3072" 00:19:29.482 } 00:19:29.482 } 00:19:29.482 ]' 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.482 11:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.482 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.742 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:29.742 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.684 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.944 00:19:30.944 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.944 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.944 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.206 11:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.206 { 00:19:31.206 "cntlid": 117, 00:19:31.206 "qid": 0, 00:19:31.206 "state": "enabled", 00:19:31.206 "thread": "nvmf_tgt_poll_group_000", 00:19:31.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:31.206 "listen_address": { 00:19:31.206 "trtype": "TCP", 00:19:31.206 "adrfam": "IPv4", 00:19:31.206 "traddr": "10.0.0.2", 00:19:31.206 "trsvcid": "4420" 00:19:31.206 }, 00:19:31.206 "peer_address": { 00:19:31.206 "trtype": "TCP", 00:19:31.206 "adrfam": "IPv4", 00:19:31.206 "traddr": "10.0.0.1", 00:19:31.206 "trsvcid": "58590" 00:19:31.206 }, 00:19:31.206 "auth": { 00:19:31.206 "state": "completed", 00:19:31.206 "digest": "sha512", 00:19:31.206 "dhgroup": "ffdhe3072" 00:19:31.206 } 00:19:31.206 } 00:19:31.206 ]' 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.206 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.467 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:31.467 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:32.039 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.039 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.039 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.039 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.301 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.562 00:19:32.562 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.562 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.562 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.824 { 00:19:32.824 "cntlid": 119, 00:19:32.824 "qid": 0, 00:19:32.824 "state": "enabled", 00:19:32.824 "thread": "nvmf_tgt_poll_group_000", 00:19:32.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:32.824 "listen_address": { 00:19:32.824 "trtype": "TCP", 00:19:32.824 "adrfam": "IPv4", 00:19:32.824 "traddr": "10.0.0.2", 00:19:32.824 "trsvcid": "4420" 00:19:32.824 }, 00:19:32.824 "peer_address": { 00:19:32.824 "trtype": "TCP", 00:19:32.824 "adrfam": "IPv4", 00:19:32.824 "traddr": "10.0.0.1", 
00:19:32.824 "trsvcid": "58630" 00:19:32.824 }, 00:19:32.824 "auth": { 00:19:32.824 "state": "completed", 00:19:32.824 "digest": "sha512", 00:19:32.824 "dhgroup": "ffdhe3072" 00:19:32.824 } 00:19:32.824 } 00:19:32.824 ]' 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.824 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.086 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:33.086 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.027 11:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.027 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.287 00:19:34.287 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.287 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.287 11:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.549 { 00:19:34.549 "cntlid": 121, 00:19:34.549 "qid": 0, 00:19:34.549 "state": "enabled", 00:19:34.549 "thread": "nvmf_tgt_poll_group_000", 00:19:34.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:34.549 "listen_address": { 00:19:34.549 "trtype": "TCP", 00:19:34.549 "adrfam": "IPv4", 00:19:34.549 "traddr": "10.0.0.2", 00:19:34.549 "trsvcid": "4420" 00:19:34.549 }, 00:19:34.549 "peer_address": { 00:19:34.549 "trtype": "TCP", 00:19:34.549 "adrfam": "IPv4", 00:19:34.549 "traddr": "10.0.0.1", 00:19:34.549 "trsvcid": "54972" 00:19:34.549 }, 00:19:34.549 "auth": { 00:19:34.549 "state": "completed", 00:19:34.549 "digest": "sha512", 00:19:34.549 "dhgroup": "ffdhe4096" 00:19:34.549 } 00:19:34.549 } 00:19:34.549 ]' 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.549 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.812 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:34.812 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:35.385 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.645 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.905 00:19:35.905 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.905 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.905 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.166 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.166 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.166 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.166 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.166 
11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.166 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.166 { 00:19:36.166 "cntlid": 123, 00:19:36.166 "qid": 0, 00:19:36.166 "state": "enabled", 00:19:36.166 "thread": "nvmf_tgt_poll_group_000", 00:19:36.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:36.166 "listen_address": { 00:19:36.166 "trtype": "TCP", 00:19:36.166 "adrfam": "IPv4", 00:19:36.166 "traddr": "10.0.0.2", 00:19:36.166 "trsvcid": "4420" 00:19:36.166 }, 00:19:36.166 "peer_address": { 00:19:36.166 "trtype": "TCP", 00:19:36.166 "adrfam": "IPv4", 00:19:36.166 "traddr": "10.0.0.1", 00:19:36.166 "trsvcid": "55004" 00:19:36.166 }, 00:19:36.166 "auth": { 00:19:36.166 "state": "completed", 00:19:36.166 "digest": "sha512", 00:19:36.166 "dhgroup": "ffdhe4096" 00:19:36.166 } 00:19:36.166 } 00:19:36.166 ]' 00:19:36.166 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.166 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.166 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.427 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.427 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.427 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.427 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.427 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.427 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:36.427 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.368 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.368 11:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.628 00:19:37.889 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.889 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.889 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.889 { 00:19:37.889 "cntlid": 125, 00:19:37.889 "qid": 0, 00:19:37.889 "state": "enabled", 00:19:37.889 "thread": "nvmf_tgt_poll_group_000", 00:19:37.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:37.889 "listen_address": { 00:19:37.889 "trtype": "TCP", 00:19:37.889 "adrfam": "IPv4", 00:19:37.889 "traddr": "10.0.0.2", 00:19:37.889 "trsvcid": "4420" 00:19:37.889 }, 00:19:37.889 "peer_address": { 
00:19:37.889 "trtype": "TCP", 00:19:37.889 "adrfam": "IPv4", 00:19:37.889 "traddr": "10.0.0.1", 00:19:37.889 "trsvcid": "55024" 00:19:37.889 }, 00:19:37.889 "auth": { 00:19:37.889 "state": "completed", 00:19:37.889 "digest": "sha512", 00:19:37.889 "dhgroup": "ffdhe4096" 00:19:37.889 } 00:19:37.889 } 00:19:37.889 ]' 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.889 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.150 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.150 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.150 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.150 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.150 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.150 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:38.150 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.091 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:39.351 11:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:39.351 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.351 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:39.351 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.351 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.351 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.351 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.351 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.351 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.351 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.612 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.612 { 00:19:39.612 "cntlid": 127, 00:19:39.612 "qid": 0, 00:19:39.612 "state": "enabled", 00:19:39.612 "thread": "nvmf_tgt_poll_group_000", 00:19:39.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:39.612 "listen_address": { 00:19:39.612 "trtype": "TCP", 00:19:39.612 "adrfam": "IPv4", 00:19:39.612 "traddr": "10.0.0.2", 00:19:39.613 "trsvcid": "4420" 00:19:39.613 }, 00:19:39.613 "peer_address": { 00:19:39.613 "trtype": "TCP", 00:19:39.613 "adrfam": "IPv4", 00:19:39.613 "traddr": "10.0.0.1", 00:19:39.613 "trsvcid": "55050" 00:19:39.613 }, 00:19:39.613 "auth": { 00:19:39.613 "state": "completed", 00:19:39.613 "digest": "sha512", 00:19:39.613 "dhgroup": "ffdhe4096" 00:19:39.613 } 00:19:39.613 } 00:19:39.613 ]' 00:19:39.613 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.613 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.613 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.873 11:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:39.873 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.873 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.873 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.873 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.873 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:39.873 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:40.814 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.814 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:40.815 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.815 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:40.815 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.815 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.815 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.815 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.815 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.815 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:40.815 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.815 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:40.815 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:40.815 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:40.815 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.075 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.076 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.076 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:41.076 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.076 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.076 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.076 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.336 00:19:41.336 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.336 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.336 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.597 11:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.597 { 00:19:41.597 "cntlid": 129, 00:19:41.597 "qid": 0, 00:19:41.597 "state": "enabled", 00:19:41.597 "thread": "nvmf_tgt_poll_group_000", 00:19:41.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:41.597 "listen_address": { 00:19:41.597 "trtype": "TCP", 00:19:41.597 "adrfam": "IPv4", 00:19:41.597 "traddr": "10.0.0.2", 00:19:41.597 "trsvcid": "4420" 00:19:41.597 }, 00:19:41.597 "peer_address": { 00:19:41.597 "trtype": "TCP", 00:19:41.597 "adrfam": "IPv4", 00:19:41.597 "traddr": "10.0.0.1", 00:19:41.597 "trsvcid": "55076" 00:19:41.597 }, 00:19:41.597 "auth": { 00:19:41.597 "state": "completed", 00:19:41.597 "digest": "sha512", 00:19:41.597 "dhgroup": "ffdhe6144" 00:19:41.597 } 00:19:41.597 } 00:19:41.597 ]' 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.597 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.857 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:41.857 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:42.800 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.800 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.800 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.800 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.800 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.800 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.800 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.800 11:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.800 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.061 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.323 { 00:19:43.323 "cntlid": 131, 00:19:43.323 "qid": 0, 00:19:43.323 "state": "enabled", 00:19:43.323 "thread": "nvmf_tgt_poll_group_000", 00:19:43.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:43.323 "listen_address": { 00:19:43.323 "trtype": "TCP", 00:19:43.323 "adrfam": "IPv4", 00:19:43.323 "traddr": "10.0.0.2", 00:19:43.323 
"trsvcid": "4420" 00:19:43.323 }, 00:19:43.323 "peer_address": { 00:19:43.323 "trtype": "TCP", 00:19:43.323 "adrfam": "IPv4", 00:19:43.323 "traddr": "10.0.0.1", 00:19:43.323 "trsvcid": "55094" 00:19:43.323 }, 00:19:43.323 "auth": { 00:19:43.323 "state": "completed", 00:19:43.323 "digest": "sha512", 00:19:43.323 "dhgroup": "ffdhe6144" 00:19:43.323 } 00:19:43.323 } 00:19:43.323 ]' 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.323 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.584 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.584 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.584 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.584 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.584 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.584 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:43.584 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:44.526 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.526 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.526 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.526 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.526 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.526 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.526 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.526 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.787 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:44.787 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.787 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:44.787 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:44.787 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.787 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.787 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.787 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.788 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.788 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.788 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.788 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.788 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.048 00:19:45.048 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.048 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:45.048 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.309 { 00:19:45.309 "cntlid": 133, 00:19:45.309 "qid": 0, 00:19:45.309 "state": "enabled", 00:19:45.309 "thread": "nvmf_tgt_poll_group_000", 00:19:45.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:45.309 "listen_address": { 00:19:45.309 "trtype": "TCP", 00:19:45.309 "adrfam": "IPv4", 00:19:45.309 "traddr": "10.0.0.2", 00:19:45.309 "trsvcid": "4420" 00:19:45.309 }, 00:19:45.309 "peer_address": { 00:19:45.309 "trtype": "TCP", 00:19:45.309 "adrfam": "IPv4", 00:19:45.309 "traddr": "10.0.0.1", 00:19:45.309 "trsvcid": "59800" 00:19:45.309 }, 00:19:45.309 "auth": { 00:19:45.309 "state": "completed", 00:19:45.309 "digest": "sha512", 00:19:45.309 "dhgroup": "ffdhe6144" 00:19:45.309 } 00:19:45.309 } 00:19:45.309 ]' 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.309 11:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.309 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.570 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:45.570 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.514 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.775 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.037 { 00:19:47.037 "cntlid": 135, 00:19:47.037 "qid": 0, 00:19:47.037 "state": "enabled", 00:19:47.037 "thread": "nvmf_tgt_poll_group_000", 00:19:47.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:47.037 "listen_address": { 00:19:47.037 "trtype": "TCP", 00:19:47.037 "adrfam": "IPv4", 00:19:47.037 "traddr": "10.0.0.2", 00:19:47.037 "trsvcid": "4420" 00:19:47.037 }, 00:19:47.037 "peer_address": { 00:19:47.037 "trtype": "TCP", 00:19:47.037 "adrfam": "IPv4", 00:19:47.037 "traddr": "10.0.0.1", 00:19:47.037 "trsvcid": "59828" 00:19:47.037 }, 00:19:47.037 "auth": { 00:19:47.037 "state": "completed", 00:19:47.037 "digest": "sha512", 00:19:47.037 "dhgroup": "ffdhe6144" 00:19:47.037 } 00:19:47.037 } 00:19:47.037 ]' 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.037 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.299 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.299 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.299 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.299 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.299 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.559 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:47.559 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:48.131 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.131 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.131 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.131 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.131 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.131 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.131 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.131 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.131 11:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.392 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.964 00:19:48.964 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.964 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.964 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.225 { 00:19:49.225 "cntlid": 137, 00:19:49.225 "qid": 0, 00:19:49.225 "state": "enabled", 00:19:49.225 "thread": "nvmf_tgt_poll_group_000", 00:19:49.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:49.225 "listen_address": { 00:19:49.225 "trtype": "TCP", 00:19:49.225 "adrfam": "IPv4", 00:19:49.225 "traddr": "10.0.0.2", 00:19:49.225 
"trsvcid": "4420" 00:19:49.225 }, 00:19:49.225 "peer_address": { 00:19:49.225 "trtype": "TCP", 00:19:49.225 "adrfam": "IPv4", 00:19:49.225 "traddr": "10.0.0.1", 00:19:49.225 "trsvcid": "59848" 00:19:49.225 }, 00:19:49.225 "auth": { 00:19:49.225 "state": "completed", 00:19:49.225 "digest": "sha512", 00:19:49.225 "dhgroup": "ffdhe8192" 00:19:49.225 } 00:19:49.225 } 00:19:49.225 ]' 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.225 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.487 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:49.487 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:50.058 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.319 11:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.319 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.890 00:19:50.891 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.891 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.891 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.151 { 00:19:51.151 "cntlid": 139, 00:19:51.151 "qid": 0, 00:19:51.151 "state": "enabled", 00:19:51.151 "thread": "nvmf_tgt_poll_group_000", 00:19:51.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:51.151 "listen_address": { 00:19:51.151 "trtype": "TCP", 00:19:51.151 "adrfam": "IPv4", 00:19:51.151 "traddr": "10.0.0.2", 00:19:51.151 "trsvcid": "4420" 00:19:51.151 }, 00:19:51.151 "peer_address": { 00:19:51.151 "trtype": "TCP", 00:19:51.151 "adrfam": "IPv4", 00:19:51.151 "traddr": "10.0.0.1", 00:19:51.151 "trsvcid": "59872" 00:19:51.151 }, 00:19:51.151 "auth": { 00:19:51.151 "state": "completed", 00:19:51.151 "digest": "sha512", 00:19:51.151 "dhgroup": "ffdhe8192" 00:19:51.151 } 00:19:51.151 } 00:19:51.151 ]' 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.151 11:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.151 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.410 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:51.410 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: --dhchap-ctrl-secret DHHC-1:02:MzEyMWVlNDYxMzUzM2Q5ZjdhYzNhYmFjOTY0YTdiODY2OTJhYmViYzJkMWNmNTRlrAFqlQ==: 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.352 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.922 00:19:52.922 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.922 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.922 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.182 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.182 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.182 11:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.182 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.182 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.182 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.182 { 00:19:53.182 "cntlid": 141, 00:19:53.182 "qid": 0, 00:19:53.182 "state": "enabled", 00:19:53.182 "thread": "nvmf_tgt_poll_group_000", 00:19:53.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:53.182 "listen_address": { 00:19:53.182 "trtype": "TCP", 00:19:53.182 "adrfam": "IPv4", 00:19:53.182 "traddr": "10.0.0.2", 00:19:53.182 "trsvcid": "4420" 00:19:53.182 }, 00:19:53.182 "peer_address": { 00:19:53.182 "trtype": "TCP", 00:19:53.182 "adrfam": "IPv4", 00:19:53.182 "traddr": "10.0.0.1", 00:19:53.182 "trsvcid": "59898" 00:19:53.182 }, 00:19:53.182 "auth": { 00:19:53.182 "state": "completed", 00:19:53.182 "digest": "sha512", 00:19:53.182 "dhgroup": "ffdhe8192" 00:19:53.182 } 00:19:53.182 } 00:19:53.182 ]' 00:19:53.182 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.183 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.183 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.183 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.183 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.183 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.183 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.183 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.443 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:53.443 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:01:YmUyMjQ5MTMwNjk3N2I2YmZhZDk2ZWQ4MmQ1ZGUyMGZDlYwe: 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.510 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:54.511 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.511 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.511 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.511 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.511 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.511 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.106 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.106 { 00:19:55.106 "cntlid": 143, 00:19:55.106 "qid": 0, 00:19:55.106 "state": "enabled", 00:19:55.106 "thread": "nvmf_tgt_poll_group_000", 00:19:55.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:55.106 "listen_address": { 00:19:55.106 "trtype": "TCP", 00:19:55.106 "adrfam": 
"IPv4", 00:19:55.106 "traddr": "10.0.0.2", 00:19:55.106 "trsvcid": "4420" 00:19:55.106 }, 00:19:55.106 "peer_address": { 00:19:55.106 "trtype": "TCP", 00:19:55.106 "adrfam": "IPv4", 00:19:55.106 "traddr": "10.0.0.1", 00:19:55.106 "trsvcid": "50330" 00:19:55.106 }, 00:19:55.106 "auth": { 00:19:55.106 "state": "completed", 00:19:55.106 "digest": "sha512", 00:19:55.106 "dhgroup": "ffdhe8192" 00:19:55.106 } 00:19:55.106 } 00:19:55.106 ]' 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.106 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.368 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.368 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.368 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.368 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:55.368 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.312 11:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.312 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.885 00:19:56.885 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.885 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.885 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.146 { 00:19:57.146 "cntlid": 145, 00:19:57.146 "qid": 0, 00:19:57.146 "state": "enabled", 00:19:57.146 "thread": "nvmf_tgt_poll_group_000", 00:19:57.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:57.146 "listen_address": { 00:19:57.146 "trtype": "TCP", 00:19:57.146 "adrfam": "IPv4", 00:19:57.146 "traddr": "10.0.0.2", 00:19:57.146 "trsvcid": "4420" 00:19:57.146 }, 00:19:57.146 "peer_address": { 00:19:57.146 "trtype": "TCP", 00:19:57.146 "adrfam": "IPv4", 00:19:57.146 "traddr": "10.0.0.1", 00:19:57.146 "trsvcid": "50366" 00:19:57.146 }, 00:19:57.146 "auth": { 00:19:57.146 "state": 
"completed", 00:19:57.146 "digest": "sha512", 00:19:57.146 "dhgroup": "ffdhe8192" 00:19:57.146 } 00:19:57.146 } 00:19:57.146 ]' 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.146 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.406 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.406 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.406 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.406 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:57.406 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTRmNzRlYjIxYzRmNWUzYjYyMjdkODNmNmVlNTdmYzM1OTcyOTZhZDNjNDQxOTcwbMb38w==: --dhchap-ctrl-secret 
DHHC-1:03:NmJkYTZmYTM1ZDhiMGU4MDNkN2YwMDQ5MDg5MmFlN2Y1NzE4YjQ3MDM4YTMzZGQzNDdhYTYzNDcxOTZlYWUyY90nrPI=: 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:58.346 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:58.608 request: 00:19:58.608 { 00:19:58.608 "name": "nvme0", 00:19:58.608 "trtype": "tcp", 00:19:58.608 "traddr": "10.0.0.2", 00:19:58.608 "adrfam": "ipv4", 00:19:58.608 "trsvcid": "4420", 00:19:58.608 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:58.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:58.608 "prchk_reftag": false, 00:19:58.608 "prchk_guard": false, 00:19:58.608 "hdgst": false, 00:19:58.608 "ddgst": false, 00:19:58.608 "dhchap_key": "key2", 00:19:58.608 "allow_unrecognized_csi": false, 00:19:58.608 "method": "bdev_nvme_attach_controller", 00:19:58.608 "req_id": 1 00:19:58.608 } 00:19:58.608 Got JSON-RPC error response 00:19:58.608 response: 00:19:58.608 { 00:19:58.608 "code": -5, 00:19:58.608 "message": 
"Input/output error" 00:19:58.608 } 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:58.868 11:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:58.868 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:58.868 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.129 request: 00:19:59.129 { 00:19:59.129 "name": "nvme0", 00:19:59.129 "trtype": "tcp", 00:19:59.129 "traddr": "10.0.0.2", 00:19:59.129 "adrfam": "ipv4", 00:19:59.129 "trsvcid": "4420", 00:19:59.129 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:59.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:59.129 "prchk_reftag": false, 00:19:59.129 "prchk_guard": false, 00:19:59.129 "hdgst": 
false, 00:19:59.129 "ddgst": false, 00:19:59.129 "dhchap_key": "key1", 00:19:59.129 "dhchap_ctrlr_key": "ckey2", 00:19:59.129 "allow_unrecognized_csi": false, 00:19:59.129 "method": "bdev_nvme_attach_controller", 00:19:59.129 "req_id": 1 00:19:59.129 } 00:19:59.129 Got JSON-RPC error response 00:19:59.129 response: 00:19:59.129 { 00:19:59.129 "code": -5, 00:19:59.129 "message": "Input/output error" 00:19:59.129 } 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.389 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.960 request: 00:19:59.960 { 00:19:59.960 "name": "nvme0", 00:19:59.960 "trtype": 
"tcp", 00:19:59.960 "traddr": "10.0.0.2", 00:19:59.960 "adrfam": "ipv4", 00:19:59.960 "trsvcid": "4420", 00:19:59.960 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:59.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:59.960 "prchk_reftag": false, 00:19:59.960 "prchk_guard": false, 00:19:59.960 "hdgst": false, 00:19:59.960 "ddgst": false, 00:19:59.960 "dhchap_key": "key1", 00:19:59.960 "dhchap_ctrlr_key": "ckey1", 00:19:59.960 "allow_unrecognized_csi": false, 00:19:59.960 "method": "bdev_nvme_attach_controller", 00:19:59.960 "req_id": 1 00:19:59.960 } 00:19:59.960 Got JSON-RPC error response 00:19:59.960 response: 00:19:59.960 { 00:19:59.960 "code": -5, 00:19:59.960 "message": "Input/output error" 00:19:59.960 } 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 4105570 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 4105570 ']' 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4105570 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4105570 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4105570' 00:19:59.960 killing process with pid 4105570 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4105570 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4105570 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=4133513 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 4133513 00:19:59.960 11:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4133513 ']' 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.960 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 4133513 00:20:00.902 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4133513 ']' 00:20:00.903 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.903 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.903 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.903 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.903 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.163 null0 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rMH 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.7Wz ]] 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Wz 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5NW 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.163 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.5l1 ]] 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5l1 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0A1 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.sfQ ]] 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sfQ 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ucz 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.164 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.104 nvme0n1 00:20:02.104 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.104 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.104 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.366 { 00:20:02.366 "cntlid": 1, 00:20:02.366 "qid": 0, 00:20:02.366 "state": "enabled", 00:20:02.366 "thread": "nvmf_tgt_poll_group_000", 00:20:02.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:02.366 "listen_address": { 00:20:02.366 "trtype": "TCP", 00:20:02.366 "adrfam": "IPv4", 00:20:02.366 "traddr": "10.0.0.2", 00:20:02.366 "trsvcid": "4420" 00:20:02.366 }, 00:20:02.366 "peer_address": { 00:20:02.366 "trtype": "TCP", 00:20:02.366 "adrfam": "IPv4", 00:20:02.366 "traddr": 
"10.0.0.1", 00:20:02.366 "trsvcid": "50434" 00:20:02.366 }, 00:20:02.366 "auth": { 00:20:02.366 "state": "completed", 00:20:02.366 "digest": "sha512", 00:20:02.366 "dhgroup": "ffdhe8192" 00:20:02.366 } 00:20:02.366 } 00:20:02.366 ]' 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.366 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.627 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:20:02.627 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:20:03.570 11:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:03.570 11:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:03.570 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:03.571 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.571 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.571 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.831 request: 00:20:03.831 { 00:20:03.831 "name": "nvme0", 00:20:03.831 "trtype": "tcp", 00:20:03.831 "traddr": "10.0.0.2", 00:20:03.831 "adrfam": "ipv4", 00:20:03.831 "trsvcid": "4420", 00:20:03.831 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:03.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:03.831 "prchk_reftag": false, 00:20:03.831 "prchk_guard": false, 00:20:03.831 "hdgst": false, 00:20:03.831 "ddgst": false, 00:20:03.831 "dhchap_key": "key3", 00:20:03.831 
"allow_unrecognized_csi": false, 00:20:03.831 "method": "bdev_nvme_attach_controller", 00:20:03.831 "req_id": 1 00:20:03.831 } 00:20:03.831 Got JSON-RPC error response 00:20:03.831 response: 00:20:03.831 { 00:20:03.831 "code": -5, 00:20:03.831 "message": "Input/output error" 00:20:03.831 } 00:20:03.831 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:03.831 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:03.831 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:03.831 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:03.831 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:03.831 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:03.831 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:03.831 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:04.092 11:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.092 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.092 request: 00:20:04.093 { 00:20:04.093 "name": "nvme0", 00:20:04.093 "trtype": "tcp", 00:20:04.093 "traddr": "10.0.0.2", 00:20:04.093 "adrfam": "ipv4", 00:20:04.093 "trsvcid": "4420", 00:20:04.093 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:04.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:04.093 "prchk_reftag": false, 00:20:04.093 "prchk_guard": false, 00:20:04.093 "hdgst": false, 00:20:04.093 "ddgst": false, 00:20:04.093 "dhchap_key": "key3", 00:20:04.093 "allow_unrecognized_csi": false, 00:20:04.093 "method": "bdev_nvme_attach_controller", 00:20:04.093 "req_id": 1 00:20:04.093 } 00:20:04.093 Got JSON-RPC error response 00:20:04.093 response: 00:20:04.093 { 00:20:04.093 "code": -5, 00:20:04.093 "message": "Input/output error" 00:20:04.093 } 00:20:04.093 
11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:04.093 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:04.354 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:04.615 request: 00:20:04.615 { 00:20:04.615 "name": "nvme0", 00:20:04.615 "trtype": "tcp", 00:20:04.615 "traddr": "10.0.0.2", 00:20:04.615 "adrfam": "ipv4", 00:20:04.615 "trsvcid": "4420", 00:20:04.615 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:04.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:04.615 "prchk_reftag": false, 00:20:04.615 "prchk_guard": false, 00:20:04.615 "hdgst": false, 00:20:04.615 "ddgst": false, 00:20:04.615 "dhchap_key": "key0", 00:20:04.615 "dhchap_ctrlr_key": "key1", 00:20:04.615 "allow_unrecognized_csi": false, 00:20:04.615 "method": "bdev_nvme_attach_controller", 00:20:04.615 "req_id": 1 00:20:04.615 } 00:20:04.615 Got JSON-RPC error response 00:20:04.615 response: 00:20:04.615 { 00:20:04.615 "code": -5, 00:20:04.615 "message": "Input/output error" 00:20:04.615 } 00:20:04.615 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:04.615 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.615 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.615 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.615 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:04.615 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:04.615 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:04.876 nvme0n1 00:20:04.876 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:04.876 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:04.876 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.137 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.137 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.137 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.398 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:05.398 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.398 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:05.398 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.398 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:05.398 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:05.398 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:06.341 nvme0n1 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.341 
11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.341 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:06.602 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.602 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:20:06.602 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: --dhchap-ctrl-secret DHHC-1:03:YzkyM2RhOGJlZTMxMTZkNGMxY2UyZjY0ZTUyOGFlMGIyZDZhZGI1ZWE5M2YxNDFkMmI4YzI1ZWMwMzE5M2MyNm5R58c=: 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:07.546 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:08.119 request: 00:20:08.119 { 00:20:08.119 "name": "nvme0", 00:20:08.119 "trtype": "tcp", 00:20:08.119 "traddr": "10.0.0.2", 00:20:08.119 "adrfam": "ipv4", 00:20:08.119 "trsvcid": "4420", 00:20:08.119 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:08.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:08.119 "prchk_reftag": false, 00:20:08.119 "prchk_guard": false, 00:20:08.119 "hdgst": false, 00:20:08.119 "ddgst": false, 00:20:08.119 "dhchap_key": "key1", 00:20:08.119 "allow_unrecognized_csi": false, 00:20:08.119 "method": "bdev_nvme_attach_controller", 00:20:08.119 "req_id": 1 00:20:08.119 } 00:20:08.119 Got JSON-RPC error response 00:20:08.119 response: 00:20:08.119 { 00:20:08.119 "code": -5, 00:20:08.119 "message": "Input/output error" 00:20:08.119 } 00:20:08.119 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:08.119 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:08.119 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:08.119 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:08.119 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:08.119 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:08.119 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:08.690 nvme0n1 00:20:08.951 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:08.951 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:08.951 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.951 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.951 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.951 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.212 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.212 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.212 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.212 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.212 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:09.212 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:09.212 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:09.474 nvme0n1 00:20:09.474 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:09.474 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:09.474 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.736 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.736 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.736 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: '' 2s 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: ]] 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGU1ZmM2YzkzMTYxMWUwOWI2YzgzODFkOWMxZTBiYTi2KG3a: 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:09.736 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:12.283 
11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: 2s 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:12.283 11:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: ]] 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTQyNmM0NGVkN2Y1YzczYzQ1N2UzN2YyODNlYjNiZjJkMjU4MDljZDI3NGQ1YWQ5S9ZWNg==: 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:12.283 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:14.198 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:14.199 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:14.771 nvme0n1 00:20:14.771 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:20:14.771 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.771 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.771 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.771 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:14.771 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:15.345 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:15.345 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.345 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:15.606 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:15.868 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:16.441 request: 00:20:16.441 { 00:20:16.441 "name": "nvme0", 00:20:16.441 "dhchap_key": "key1", 00:20:16.441 "dhchap_ctrlr_key": "key3", 00:20:16.441 "method": "bdev_nvme_set_keys", 00:20:16.441 "req_id": 1 00:20:16.441 } 00:20:16.441 Got JSON-RPC error response 00:20:16.441 response: 00:20:16.441 { 00:20:16.441 "code": -13, 00:20:16.441 "message": "Permission denied" 00:20:16.441 } 00:20:16.441 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:16.441 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.441 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.441 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.441 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:16.441 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:16.441 11:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.441 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:16.441 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:17.828 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:18.773 nvme0n1 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.773 11:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:18.773 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:19.034 request: 00:20:19.034 { 00:20:19.034 "name": "nvme0", 00:20:19.034 "dhchap_key": "key2", 00:20:19.034 "dhchap_ctrlr_key": "key0", 00:20:19.034 "method": "bdev_nvme_set_keys", 00:20:19.034 "req_id": 1 00:20:19.034 } 00:20:19.034 Got JSON-RPC error response 00:20:19.034 response: 00:20:19.034 { 00:20:19.034 "code": -13, 00:20:19.034 "message": "Permission denied" 00:20:19.034 } 00:20:19.035 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:19.035 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:19.035 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:19.035 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:19.035 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:19.035 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:19.035 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.295 11:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:19.295 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:20.240 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:20.240 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:20.240 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4105903 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4105903 ']' 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4105903 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4105903 00:20:20.501 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:20.502 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:20.502 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 4105903' 00:20:20.502 killing process with pid 4105903 00:20:20.502 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4105903 00:20:20.502 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4105903 00:20:20.764 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:20.764 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.764 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:20.764 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.764 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:20.764 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.764 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.764 rmmod nvme_tcp 00:20:20.764 rmmod nvme_fabrics 00:20:20.764 rmmod nvme_keyring 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 4133513 ']' 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 4133513 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4133513 ']' 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4133513 
00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4133513 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4133513' 00:20:20.764 killing process with pid 4133513 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4133513 00:20:20.764 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4133513 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.025 11:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.025 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rMH /tmp/spdk.key-sha256.5NW /tmp/spdk.key-sha384.0A1 /tmp/spdk.key-sha512.ucz /tmp/spdk.key-sha512.7Wz /tmp/spdk.key-sha384.5l1 /tmp/spdk.key-sha256.sfQ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:23.011 00:20:23.011 real 2m46.842s 00:20:23.011 user 6m10.035s 00:20:23.011 sys 0m25.293s 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.011 ************************************ 00:20:23.011 END TEST nvmf_auth_target 00:20:23.011 ************************************ 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:20:23.011 11:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 ************************************ 00:20:23.280 START TEST nvmf_bdevio_no_huge 00:20:23.280 ************************************ 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:23.280 * Looking for test storage... 00:20:23.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:23.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.280 --rc genhtml_branch_coverage=1 00:20:23.280 --rc genhtml_function_coverage=1 00:20:23.280 --rc genhtml_legend=1 00:20:23.280 --rc geninfo_all_blocks=1 00:20:23.280 --rc geninfo_unexecuted_blocks=1 00:20:23.280 00:20:23.280 ' 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:23.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.280 --rc genhtml_branch_coverage=1 00:20:23.280 --rc genhtml_function_coverage=1 00:20:23.280 --rc genhtml_legend=1 00:20:23.280 --rc geninfo_all_blocks=1 00:20:23.280 --rc geninfo_unexecuted_blocks=1 00:20:23.280 00:20:23.280 ' 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:23.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.280 --rc genhtml_branch_coverage=1 00:20:23.280 --rc genhtml_function_coverage=1 00:20:23.280 --rc genhtml_legend=1 00:20:23.280 --rc geninfo_all_blocks=1 00:20:23.280 --rc geninfo_unexecuted_blocks=1 00:20:23.280 00:20:23.280 ' 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:23.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.280 --rc genhtml_branch_coverage=1 
00:20:23.280 --rc genhtml_function_coverage=1 00:20:23.280 --rc genhtml_legend=1 00:20:23.280 --rc geninfo_all_blocks=1 00:20:23.280 --rc geninfo_unexecuted_blocks=1 00:20:23.280 00:20:23.280 ' 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.280 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.281 11:14:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.281 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:20:31.421 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:31.421 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:31.422 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:31.422 Found net devices under 0000:31:00.0: cvl_0_0 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.422 
11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:31.422 Found net devices under 0000:31:00.1: cvl_0_1 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.422 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:31.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:20:31.681 00:20:31.681 --- 10.0.0.2 ping statistics --- 00:20:31.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.681 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:31.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:20:31.681 00:20:31.681 --- 10.0.0.1 ping statistics --- 00:20:31.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.681 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:31.681 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=4142381 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 4142381 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 4142381 ']' 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.681 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.682 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.682 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:31.942 [2024-11-19 11:14:40.073296] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:20:31.942 [2024-11-19 11:14:40.073347] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:31.942 [2024-11-19 11:14:40.180872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.942 [2024-11-19 11:14:40.232597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.942 [2024-11-19 11:14:40.232629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.942 [2024-11-19 11:14:40.232637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.942 [2024-11-19 11:14:40.232644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.942 [2024-11-19 11:14:40.232650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:31.942 [2024-11-19 11:14:40.233891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:31.942 [2024-11-19 11:14:40.234053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:31.942 [2024-11-19 11:14:40.234328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.942 [2024-11-19 11:14:40.234328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 [2024-11-19 11:14:40.941910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:32.884 11:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 Malloc0 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.884 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 [2024-11-19 11:14:40.995868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.884 11:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.884 { 00:20:32.884 "params": { 00:20:32.884 "name": "Nvme$subsystem", 00:20:32.884 "trtype": "$TEST_TRANSPORT", 00:20:32.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.884 "adrfam": "ipv4", 00:20:32.884 "trsvcid": "$NVMF_PORT", 00:20:32.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.884 "hdgst": ${hdgst:-false}, 00:20:32.884 "ddgst": ${ddgst:-false} 00:20:32.884 }, 00:20:32.884 "method": "bdev_nvme_attach_controller" 00:20:32.884 } 00:20:32.884 EOF 00:20:32.884 )") 00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:20:32.884 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:32.885 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:32.885 "params": { 00:20:32.885 "name": "Nvme1", 00:20:32.885 "trtype": "tcp", 00:20:32.885 "traddr": "10.0.0.2", 00:20:32.885 "adrfam": "ipv4", 00:20:32.885 "trsvcid": "4420", 00:20:32.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.885 "hdgst": false, 00:20:32.885 "ddgst": false 00:20:32.885 }, 00:20:32.885 "method": "bdev_nvme_attach_controller" 00:20:32.885 }' 00:20:32.885 [2024-11-19 11:14:41.057151] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:20:32.885 [2024-11-19 11:14:41.057238] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4142720 ] 00:20:32.885 [2024-11-19 11:14:41.149400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:32.885 [2024-11-19 11:14:41.204572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.885 [2024-11-19 11:14:41.204689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.885 [2024-11-19 11:14:41.204692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.145 I/O targets: 00:20:33.145 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:33.145 00:20:33.145 00:20:33.145 CUnit - A unit testing framework for C - Version 2.1-3 00:20:33.145 http://cunit.sourceforge.net/ 00:20:33.145 00:20:33.145 00:20:33.145 Suite: bdevio tests on: Nvme1n1 00:20:33.405 Test: blockdev write read block ...passed 00:20:33.405 Test: blockdev write zeroes read block ...passed 00:20:33.405 Test: blockdev write zeroes read no split ...passed 00:20:33.405 Test: blockdev write zeroes 
read split ...passed 00:20:33.405 Test: blockdev write zeroes read split partial ...passed 00:20:33.405 Test: blockdev reset ...[2024-11-19 11:14:41.584548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:33.405 [2024-11-19 11:14:41.584620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdebfb0 (9): Bad file descriptor 00:20:33.405 [2024-11-19 11:14:41.644986] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:33.405 passed 00:20:33.405 Test: blockdev write read 8 blocks ...passed 00:20:33.405 Test: blockdev write read size > 128k ...passed 00:20:33.405 Test: blockdev write read invalid size ...passed 00:20:33.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:33.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:33.665 Test: blockdev write read max offset ...passed 00:20:33.665 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:33.665 Test: blockdev writev readv 8 blocks ...passed 00:20:33.665 Test: blockdev writev readv 30 x 1block ...passed 00:20:33.665 Test: blockdev writev readv block ...passed 00:20:33.665 Test: blockdev writev readv size > 128k ...passed 00:20:33.665 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:33.665 Test: blockdev comparev and writev ...[2024-11-19 11:14:41.947865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.665 [2024-11-19 11:14:41.947890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.665 [2024-11-19 11:14:41.947901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.665 [2024-11-19 
11:14:41.947907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:33.665 [2024-11-19 11:14:41.948278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.665 [2024-11-19 11:14:41.948285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:33.665 [2024-11-19 11:14:41.948295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.665 [2024-11-19 11:14:41.948301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:33.665 [2024-11-19 11:14:41.948672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.665 [2024-11-19 11:14:41.948679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:33.665 [2024-11-19 11:14:41.948688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.665 [2024-11-19 11:14:41.948694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:33.665 [2024-11-19 11:14:41.949051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.665 [2024-11-19 11:14:41.949059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:33.665 [2024-11-19 11:14:41.949068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:33.665 [2024-11-19 11:14:41.949074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:33.665 passed 00:20:33.925 Test: blockdev nvme passthru rw ...passed 00:20:33.925 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:14:42.032412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:33.925 [2024-11-19 11:14:42.032423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:33.925 [2024-11-19 11:14:42.032631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:33.925 [2024-11-19 11:14:42.032638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:33.925 [2024-11-19 11:14:42.032881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:33.925 [2024-11-19 11:14:42.032888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:33.925 [2024-11-19 11:14:42.033101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:33.925 [2024-11-19 11:14:42.033107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:33.925 passed 00:20:33.925 Test: blockdev nvme admin passthru ...passed 00:20:33.925 Test: blockdev copy ...passed 00:20:33.925 00:20:33.925 Run Summary: Type Total Ran Passed Failed Inactive 00:20:33.925 suites 1 1 n/a 0 0 00:20:33.925 tests 23 23 23 0 0 00:20:33.925 asserts 152 152 152 0 n/a 00:20:33.925 00:20:33.925 Elapsed time = 1.274 seconds 
00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.185 rmmod nvme_tcp 00:20:34.185 rmmod nvme_fabrics 00:20:34.185 rmmod nvme_keyring 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 4142381 ']' 00:20:34.185 11:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 4142381 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 4142381 ']' 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 4142381 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4142381 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4142381' 00:20:34.185 killing process with pid 4142381 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 4142381 00:20:34.185 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 4142381 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.446 11:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.446 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.991 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.991 00:20:36.991 real 0m13.443s 00:20:36.991 user 0m14.652s 00:20:36.991 sys 0m7.311s 00:20:36.991 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.991 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:36.991 ************************************ 00:20:36.991 END TEST nvmf_bdevio_no_huge 00:20:36.991 ************************************ 00:20:36.991 11:14:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:36.992 11:14:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:36.992 11:14:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.992 11:14:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:36.992 
************************************ 00:20:36.992 START TEST nvmf_tls 00:20:36.992 ************************************ 00:20:36.992 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:36.992 * Looking for test storage... 00:20:36.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:36.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.992 --rc genhtml_branch_coverage=1 00:20:36.992 --rc genhtml_function_coverage=1 00:20:36.992 --rc genhtml_legend=1 00:20:36.992 --rc geninfo_all_blocks=1 00:20:36.992 --rc geninfo_unexecuted_blocks=1 00:20:36.992 00:20:36.992 ' 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:36.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.992 --rc genhtml_branch_coverage=1 00:20:36.992 --rc genhtml_function_coverage=1 00:20:36.992 --rc genhtml_legend=1 00:20:36.992 --rc geninfo_all_blocks=1 00:20:36.992 --rc geninfo_unexecuted_blocks=1 00:20:36.992 00:20:36.992 ' 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:36.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.992 --rc genhtml_branch_coverage=1 00:20:36.992 --rc genhtml_function_coverage=1 00:20:36.992 --rc genhtml_legend=1 00:20:36.992 --rc geninfo_all_blocks=1 00:20:36.992 --rc geninfo_unexecuted_blocks=1 00:20:36.992 00:20:36.992 ' 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:36.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.992 --rc genhtml_branch_coverage=1 00:20:36.992 --rc genhtml_function_coverage=1 00:20:36.992 --rc genhtml_legend=1 00:20:36.992 --rc geninfo_all_blocks=1 00:20:36.992 --rc geninfo_unexecuted_blocks=1 00:20:36.992 00:20:36.992 ' 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.992 
11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:36.992 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:36.993 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.131 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.132 11:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:45.132 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:45.132 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.132 11:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:45.132 Found net devices under 0000:31:00.0: cvl_0_0 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:45.132 Found net devices under 0000:31:00.1: cvl_0_1 00:20:45.132 11:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:45.132 
11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.132 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:45.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:20:45.394 00:20:45.394 --- 10.0.0.2 ping statistics --- 00:20:45.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.394 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:20:45.394 00:20:45.394 --- 10.0.0.1 ping statistics --- 00:20:45.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.394 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:45.394 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4147760 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4147760 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4147760 ']' 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.395 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.395 [2024-11-19 11:14:53.735562] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:20:45.395 [2024-11-19 11:14:53.735628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.655 [2024-11-19 11:14:53.845649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.655 [2024-11-19 11:14:53.895257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.655 [2024-11-19 11:14:53.895312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:45.655 [2024-11-19 11:14:53.895320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.655 [2024-11-19 11:14:53.895328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.655 [2024-11-19 11:14:53.895334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.655 [2024-11-19 11:14:53.896143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.227 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.227 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:46.227 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.227 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:46.227 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.487 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.487 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:46.487 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:46.487 true 00:20:46.487 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:46.488 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:46.748 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:46.748 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:46.748 
11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:47.009 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:47.009 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:47.009 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:47.009 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:47.009 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:47.271 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:47.271 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:47.532 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:47.532 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:47.532 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:47.532 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:47.793 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:47.793 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:47.793 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:47.793 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:47.793 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:48.054 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:48.054 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:48.054 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:48.316 11:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:48.316 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.X0pbL80Bxj 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.RKADfgfsik 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.X0pbL80Bxj 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.RKADfgfsik 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:48.576 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:48.835 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.X0pbL80Bxj 00:20:48.835 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.X0pbL80Bxj 00:20:48.835 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.097 [2024-11-19 11:14:57.270258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.097 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:49.358 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:49.358 [2024-11-19 11:14:57.607079] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.358 [2024-11-19 11:14:57.607276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.358 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:49.619 malloc0 00:20:49.619 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:49.619 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.X0pbL80Bxj 00:20:49.880 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:50.141 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.X0pbL80Bxj 00:21:00.139 Initializing NVMe Controllers 00:21:00.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:00.139 Initialization complete. Launching workers. 
00:21:00.139 ======================================================== 00:21:00.139 Latency(us) 00:21:00.139 Device Information : IOPS MiB/s Average min max 00:21:00.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18576.49 72.56 3445.21 1121.39 4161.69 00:21:00.140 ======================================================== 00:21:00.140 Total : 18576.49 72.56 3445.21 1121.39 4161.69 00:21:00.140 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X0pbL80Bxj 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.X0pbL80Bxj 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4150610 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4150610 /var/tmp/bdevperf.sock 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4150610 ']' 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.140 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.140 [2024-11-19 11:15:08.424273] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:00.140 [2024-11-19 11:15:08.424331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4150610 ] 00:21:00.140 [2024-11-19 11:15:08.488440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.400 [2024-11-19 11:15:08.517793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.400 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.400 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:00.400 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X0pbL80Bxj 00:21:00.660 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:00.660 [2024-11-19 11:15:08.911454] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.660 TLSTESTn1 00:21:00.920 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:00.920 Running I/O for 10 seconds... 00:21:02.801 5512.00 IOPS, 21.53 MiB/s [2024-11-19T10:15:12.536Z] 5556.00 IOPS, 21.70 MiB/s [2024-11-19T10:15:13.477Z] 5675.00 IOPS, 22.17 MiB/s [2024-11-19T10:15:14.420Z] 5885.25 IOPS, 22.99 MiB/s [2024-11-19T10:15:15.362Z] 5778.80 IOPS, 22.57 MiB/s [2024-11-19T10:15:16.307Z] 5846.50 IOPS, 22.84 MiB/s [2024-11-19T10:15:17.248Z] 5758.14 IOPS, 22.49 MiB/s [2024-11-19T10:15:18.191Z] 5857.38 IOPS, 22.88 MiB/s [2024-11-19T10:15:19.297Z] 5862.00 IOPS, 22.90 MiB/s [2024-11-19T10:15:19.297Z] 5749.00 IOPS, 22.46 MiB/s 00:21:10.945 Latency(us) 00:21:10.945 [2024-11-19T10:15:19.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.945 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:10.945 Verification LBA range: start 0x0 length 0x2000 00:21:10.945 TLSTESTn1 : 10.02 5752.15 22.47 0.00 0.00 22220.51 5324.80 24466.77 00:21:10.945 [2024-11-19T10:15:19.297Z] =================================================================================================================== 00:21:10.945 [2024-11-19T10:15:19.297Z] Total : 5752.15 22.47 0.00 0.00 22220.51 5324.80 24466.77 00:21:10.945 { 00:21:10.945 "results": [ 00:21:10.945 { 00:21:10.945 "job": "TLSTESTn1", 00:21:10.945 "core_mask": "0x4", 00:21:10.945 "workload": "verify", 00:21:10.945 "status": "finished", 00:21:10.945 "verify_range": { 00:21:10.945 "start": 0, 00:21:10.945 "length": 8192 00:21:10.945 }, 00:21:10.945 "queue_depth": 128, 00:21:10.945 "io_size": 4096, 00:21:10.945 "runtime": 10.016603, 00:21:10.945 "iops": 
5752.149705843388, 00:21:10.945 "mibps": 22.469334788450734, 00:21:10.945 "io_failed": 0, 00:21:10.946 "io_timeout": 0, 00:21:10.946 "avg_latency_us": 22220.505568379704, 00:21:10.946 "min_latency_us": 5324.8, 00:21:10.946 "max_latency_us": 24466.773333333334 00:21:10.946 } 00:21:10.946 ], 00:21:10.946 "core_count": 1 00:21:10.946 } 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 4150610 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4150610 ']' 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4150610 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4150610 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4150610' 00:21:10.946 killing process with pid 4150610 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4150610 00:21:10.946 Received shutdown signal, test time was about 10.000000 seconds 00:21:10.946 00:21:10.946 Latency(us) 00:21:10.946 [2024-11-19T10:15:19.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.946 [2024-11-19T10:15:19.298Z] 
=================================================================================================================== 00:21:10.946 [2024-11-19T10:15:19.298Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.946 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4150610 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RKADfgfsik 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RKADfgfsik 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RKADfgfsik 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RKADfgfsik 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4153263 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4153263 /var/tmp/bdevperf.sock 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4153263 ']' 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.308 [2024-11-19 11:15:19.386702] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:11.308 [2024-11-19 11:15:19.386760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153263 ] 00:21:11.308 [2024-11-19 11:15:19.449480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.308 [2024-11-19 11:15:19.478036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.308 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RKADfgfsik 00:21:11.568 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:11.568 [2024-11-19 11:15:19.875375] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.568 [2024-11-19 11:15:19.885816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:11.568 [2024-11-19 11:15:19.886597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1114960 (107): Transport endpoint is not connected 00:21:11.568 [2024-11-19 11:15:19.887593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1114960 (9): Bad file descriptor 00:21:11.568 
[2024-11-19 11:15:19.888595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:11.568 [2024-11-19 11:15:19.888602] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:11.568 [2024-11-19 11:15:19.888608] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:11.568 [2024-11-19 11:15:19.888615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:11.568 request: 00:21:11.568 { 00:21:11.568 "name": "TLSTEST", 00:21:11.568 "trtype": "tcp", 00:21:11.568 "traddr": "10.0.0.2", 00:21:11.568 "adrfam": "ipv4", 00:21:11.568 "trsvcid": "4420", 00:21:11.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.568 "prchk_reftag": false, 00:21:11.569 "prchk_guard": false, 00:21:11.569 "hdgst": false, 00:21:11.569 "ddgst": false, 00:21:11.569 "psk": "key0", 00:21:11.569 "allow_unrecognized_csi": false, 00:21:11.569 "method": "bdev_nvme_attach_controller", 00:21:11.569 "req_id": 1 00:21:11.569 } 00:21:11.569 Got JSON-RPC error response 00:21:11.569 response: 00:21:11.569 { 00:21:11.569 "code": -5, 00:21:11.569 "message": "Input/output error" 00:21:11.569 } 00:21:11.569 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4153263 00:21:11.569 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4153263 ']' 00:21:11.569 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4153263 00:21:11.569 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:11.569 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.569 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153263 00:21:11.829 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:11.829 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:11.829 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153263' 00:21:11.829 killing process with pid 4153263 00:21:11.829 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4153263 00:21:11.829 Received shutdown signal, test time was about 10.000000 seconds 00:21:11.829 00:21:11.829 Latency(us) 00:21:11.829 [2024-11-19T10:15:20.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.829 [2024-11-19T10:15:20.181Z] =================================================================================================================== 00:21:11.829 [2024-11-19T10:15:20.181Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:11.829 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4153263 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X0pbL80Bxj 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X0pbL80Bxj 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:11.829 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X0pbL80Bxj 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.X0pbL80Bxj 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4153405 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4153405 /var/tmp/bdevperf.sock 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4153405 ']' 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.830 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.830 [2024-11-19 11:15:20.118924] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:11.830 [2024-11-19 11:15:20.118983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153405 ] 00:21:12.090 [2024-11-19 11:15:20.182155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.090 [2024-11-19 11:15:20.211224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.090 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.090 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:12.090 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X0pbL80Bxj 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:12.351 [2024-11-19 11:15:20.588574] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.351 [2024-11-19 11:15:20.597454] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:12.351 [2024-11-19 11:15:20.597472] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:12.351 [2024-11-19 11:15:20.597490] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:12.351 [2024-11-19 11:15:20.597579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00960 (107): Transport endpoint is not connected 00:21:12.351 [2024-11-19 11:15:20.598567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00960 (9): Bad file descriptor 00:21:12.351 [2024-11-19 11:15:20.599568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:12.351 [2024-11-19 11:15:20.599576] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:12.351 [2024-11-19 11:15:20.599581] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:12.351 [2024-11-19 11:15:20.599589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:12.351 request: 00:21:12.351 { 00:21:12.351 "name": "TLSTEST", 00:21:12.351 "trtype": "tcp", 00:21:12.351 "traddr": "10.0.0.2", 00:21:12.351 "adrfam": "ipv4", 00:21:12.351 "trsvcid": "4420", 00:21:12.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.351 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:12.351 "prchk_reftag": false, 00:21:12.351 "prchk_guard": false, 00:21:12.351 "hdgst": false, 00:21:12.351 "ddgst": false, 00:21:12.351 "psk": "key0", 00:21:12.351 "allow_unrecognized_csi": false, 00:21:12.351 "method": "bdev_nvme_attach_controller", 00:21:12.351 "req_id": 1 00:21:12.351 } 00:21:12.351 Got JSON-RPC error response 00:21:12.351 response: 00:21:12.351 { 00:21:12.351 "code": -5, 00:21:12.351 "message": "Input/output error" 00:21:12.351 } 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4153405 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4153405 ']' 00:21:12.351 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4153405 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153405 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153405' 00:21:12.351 killing process with pid 4153405 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4153405 00:21:12.351 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.351 00:21:12.351 Latency(us) 00:21:12.351 [2024-11-19T10:15:20.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.351 [2024-11-19T10:15:20.703Z] =================================================================================================================== 00:21:12.351 [2024-11-19T10:15:20.703Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:12.351 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4153405 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.617 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X0pbL80Bxj 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X0pbL80Bxj 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X0pbL80Bxj 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.X0pbL80Bxj 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4153483 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4153483 /var/tmp/bdevperf.sock 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4153483 ']' 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.617 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.617 [2024-11-19 11:15:20.829597] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:12.617 [2024-11-19 11:15:20.829655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153483 ] 00:21:12.617 [2024-11-19 11:15:20.892802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.617 [2024-11-19 11:15:20.921432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.879 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.879 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:12.879 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X0pbL80Bxj 00:21:12.879 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:13.140 [2024-11-19 11:15:21.318804] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.140 [2024-11-19 11:15:21.329214] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:13.140 [2024-11-19 11:15:21.329232] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:13.140 [2024-11-19 11:15:21.329249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:13.140 [2024-11-19 11:15:21.329968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b37960 (107): Transport endpoint is not connected 00:21:13.140 [2024-11-19 11:15:21.330964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b37960 (9): Bad file descriptor 00:21:13.140 [2024-11-19 11:15:21.331966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:13.140 [2024-11-19 11:15:21.331973] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:13.140 [2024-11-19 11:15:21.331978] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:13.140 [2024-11-19 11:15:21.331986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:13.140 request: 00:21:13.140 { 00:21:13.140 "name": "TLSTEST", 00:21:13.140 "trtype": "tcp", 00:21:13.140 "traddr": "10.0.0.2", 00:21:13.140 "adrfam": "ipv4", 00:21:13.140 "trsvcid": "4420", 00:21:13.140 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:13.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.140 "prchk_reftag": false, 00:21:13.140 "prchk_guard": false, 00:21:13.140 "hdgst": false, 00:21:13.140 "ddgst": false, 00:21:13.140 "psk": "key0", 00:21:13.140 "allow_unrecognized_csi": false, 00:21:13.140 "method": "bdev_nvme_attach_controller", 00:21:13.141 "req_id": 1 00:21:13.141 } 00:21:13.141 Got JSON-RPC error response 00:21:13.141 response: 00:21:13.141 { 00:21:13.141 "code": -5, 00:21:13.141 "message": "Input/output error" 00:21:13.141 } 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4153483 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4153483 ']' 00:21:13.141 11:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4153483 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153483 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153483' 00:21:13.141 killing process with pid 4153483 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4153483 00:21:13.141 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.141 00:21:13.141 Latency(us) 00:21:13.141 [2024-11-19T10:15:21.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.141 [2024-11-19T10:15:21.493Z] =================================================================================================================== 00:21:13.141 [2024-11-19T10:15:21.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.141 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4153483 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.402 11:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4153753 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:13.402 11:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4153753 /var/tmp/bdevperf.sock 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4153753 ']' 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.402 [2024-11-19 11:15:21.571995] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:13.402 [2024-11-19 11:15:21.572048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153753 ] 00:21:13.402 [2024-11-19 11:15:21.636987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.402 [2024-11-19 11:15:21.664401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.402 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:13.664 [2024-11-19 11:15:21.901275] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:13.664 [2024-11-19 11:15:21.901301] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:13.664 request: 00:21:13.664 { 00:21:13.664 "name": "key0", 00:21:13.664 "path": "", 00:21:13.664 "method": "keyring_file_add_key", 00:21:13.664 "req_id": 1 00:21:13.664 } 00:21:13.664 Got JSON-RPC error response 00:21:13.664 response: 00:21:13.664 { 00:21:13.664 "code": -1, 00:21:13.664 "message": "Operation not permitted" 00:21:13.664 } 00:21:13.664 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:13.925 [2024-11-19 11:15:22.077794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:13.925 [2024-11-19 11:15:22.077819] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:13.925 request: 00:21:13.925 { 00:21:13.925 "name": "TLSTEST", 00:21:13.925 "trtype": "tcp", 00:21:13.925 "traddr": "10.0.0.2", 00:21:13.925 "adrfam": "ipv4", 00:21:13.925 "trsvcid": "4420", 00:21:13.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.925 "prchk_reftag": false, 00:21:13.925 "prchk_guard": false, 00:21:13.925 "hdgst": false, 00:21:13.925 "ddgst": false, 00:21:13.925 "psk": "key0", 00:21:13.925 "allow_unrecognized_csi": false, 00:21:13.925 "method": "bdev_nvme_attach_controller", 00:21:13.925 "req_id": 1 00:21:13.925 } 00:21:13.925 Got JSON-RPC error response 00:21:13.925 response: 00:21:13.925 { 00:21:13.925 "code": -126, 00:21:13.925 "message": "Required key not available" 00:21:13.926 } 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4153753 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4153753 ']' 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4153753 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153753 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153753' 00:21:13.926 killing process with pid 4153753 
00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4153753 00:21:13.926 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.926 00:21:13.926 Latency(us) 00:21:13.926 [2024-11-19T10:15:22.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.926 [2024-11-19T10:15:22.278Z] =================================================================================================================== 00:21:13.926 [2024-11-19T10:15:22.278Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4153753 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 4147760 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4147760 ']' 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4147760 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.926 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4147760 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4147760' 00:21:14.187 killing process with pid 4147760 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4147760 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4147760 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Kfv4rzTWpm 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:14.187 11:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Kfv4rzTWpm 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4153824 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4153824 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4153824 ']' 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.187 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.187 [2024-11-19 11:15:22.536722] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:14.187 [2024-11-19 11:15:22.536813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.448 [2024-11-19 11:15:22.637779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.448 [2024-11-19 11:15:22.669756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.448 [2024-11-19 11:15:22.669787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.448 [2024-11-19 11:15:22.669793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.448 [2024-11-19 11:15:22.669798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.448 [2024-11-19 11:15:22.669803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.448 [2024-11-19 11:15:22.670309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Kfv4rzTWpm 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Kfv4rzTWpm 00:21:15.019 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.281 [2024-11-19 11:15:23.515562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.281 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:15.542 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:15.542 [2024-11-19 11:15:23.840359] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.542 [2024-11-19 11:15:23.840545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:15.542 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:15.804 malloc0 00:21:15.804 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:16.065 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:16.065 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:16.326 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Kfv4rzTWpm 00:21:16.326 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:16.326 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:16.326 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:16.326 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Kfv4rzTWpm 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4154335 00:21:16.327 11:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4154335 /var/tmp/bdevperf.sock 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4154335 ']' 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.327 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.327 [2024-11-19 11:15:24.564564] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:16.327 [2024-11-19 11:15:24.564617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154335 ] 00:21:16.327 [2024-11-19 11:15:24.629175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.327 [2024-11-19 11:15:24.658533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.588 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.588 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.588 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:16.588 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:16.850 [2024-11-19 11:15:25.068139] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.850 TLSTESTn1 00:21:16.850 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:17.111 Running I/O for 10 seconds... 
00:21:18.995 5544.00 IOPS, 21.66 MiB/s [2024-11-19T10:15:28.290Z] 5441.00 IOPS, 21.25 MiB/s [2024-11-19T10:15:29.675Z] 5796.33 IOPS, 22.64 MiB/s [2024-11-19T10:15:30.617Z] 5969.50 IOPS, 23.32 MiB/s [2024-11-19T10:15:31.559Z] 5953.60 IOPS, 23.26 MiB/s [2024-11-19T10:15:32.501Z] 5920.83 IOPS, 23.13 MiB/s [2024-11-19T10:15:33.444Z] 5866.71 IOPS, 22.92 MiB/s [2024-11-19T10:15:34.387Z] 5955.00 IOPS, 23.26 MiB/s [2024-11-19T10:15:35.329Z] 5746.11 IOPS, 22.45 MiB/s [2024-11-19T10:15:35.329Z] 5699.60 IOPS, 22.26 MiB/s 00:21:26.977 Latency(us) 00:21:26.977 [2024-11-19T10:15:35.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.977 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:26.977 Verification LBA range: start 0x0 length 0x2000 00:21:26.977 TLSTESTn1 : 10.02 5699.56 22.26 0.00 0.00 22420.30 4860.59 24576.00 00:21:26.977 [2024-11-19T10:15:35.329Z] =================================================================================================================== 00:21:26.977 [2024-11-19T10:15:35.329Z] Total : 5699.56 22.26 0.00 0.00 22420.30 4860.59 24576.00 00:21:26.977 { 00:21:26.977 "results": [ 00:21:26.977 { 00:21:26.977 "job": "TLSTESTn1", 00:21:26.977 "core_mask": "0x4", 00:21:26.977 "workload": "verify", 00:21:26.977 "status": "finished", 00:21:26.977 "verify_range": { 00:21:26.977 "start": 0, 00:21:26.977 "length": 8192 00:21:26.977 }, 00:21:26.977 "queue_depth": 128, 00:21:26.977 "io_size": 4096, 00:21:26.977 "runtime": 10.022349, 00:21:26.977 "iops": 5699.562048777188, 00:21:26.977 "mibps": 22.26391425303589, 00:21:26.977 "io_failed": 0, 00:21:26.977 "io_timeout": 0, 00:21:26.977 "avg_latency_us": 22420.30256020634, 00:21:26.977 "min_latency_us": 4860.586666666667, 00:21:26.977 "max_latency_us": 24576.0 00:21:26.977 } 00:21:26.977 ], 00:21:26.977 "core_count": 1 00:21:26.977 } 00:21:26.977 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:21:26.977 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 4154335 00:21:26.977 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4154335 ']' 00:21:26.977 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4154335 00:21:26.977 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:26.977 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.238 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4154335 00:21:27.238 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:27.238 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:27.238 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4154335' 00:21:27.238 killing process with pid 4154335 00:21:27.238 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4154335 00:21:27.238 Received shutdown signal, test time was about 10.000000 seconds 00:21:27.238 00:21:27.238 Latency(us) 00:21:27.238 [2024-11-19T10:15:35.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.238 [2024-11-19T10:15:35.590Z] =================================================================================================================== 00:21:27.238 [2024-11-19T10:15:35.590Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.238 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4154335 00:21:27.238 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Kfv4rzTWpm 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Kfv4rzTWpm 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Kfv4rzTWpm 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Kfv4rzTWpm 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Kfv4rzTWpm 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4156482 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4156482 /var/tmp/bdevperf.sock 00:21:27.239 
11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4156482 ']' 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.239 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.239 [2024-11-19 11:15:35.558512] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:27.239 [2024-11-19 11:15:35.558566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156482 ] 00:21:27.500 [2024-11-19 11:15:35.622939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.500 [2024-11-19 11:15:35.651105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.500 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.500 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:27.500 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:27.762 [2024-11-19 11:15:35.888052] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Kfv4rzTWpm': 0100666 00:21:27.762 [2024-11-19 11:15:35.888076] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:27.762 request: 00:21:27.762 { 00:21:27.762 "name": "key0", 00:21:27.762 "path": "/tmp/tmp.Kfv4rzTWpm", 00:21:27.762 "method": "keyring_file_add_key", 00:21:27.762 "req_id": 1 00:21:27.762 } 00:21:27.762 Got JSON-RPC error response 00:21:27.762 response: 00:21:27.762 { 00:21:27.762 "code": -1, 00:21:27.762 "message": "Operation not permitted" 00:21:27.762 } 00:21:27.762 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:27.762 [2024-11-19 11:15:36.068582] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.762 [2024-11-19 11:15:36.068604] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:27.762 request: 00:21:27.762 { 00:21:27.762 "name": "TLSTEST", 00:21:27.762 "trtype": "tcp", 00:21:27.762 "traddr": "10.0.0.2", 00:21:27.762 "adrfam": "ipv4", 00:21:27.762 "trsvcid": "4420", 00:21:27.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.762 "prchk_reftag": false, 00:21:27.762 "prchk_guard": false, 00:21:27.762 "hdgst": false, 00:21:27.762 "ddgst": false, 00:21:27.762 "psk": "key0", 00:21:27.762 "allow_unrecognized_csi": false, 00:21:27.762 "method": "bdev_nvme_attach_controller", 00:21:27.762 "req_id": 1 00:21:27.762 } 00:21:27.762 Got JSON-RPC error response 00:21:27.762 response: 00:21:27.762 { 00:21:27.762 "code": -126, 00:21:27.762 "message": "Required key not available" 00:21:27.762 } 00:21:27.762 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4156482 00:21:27.762 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4156482 ']' 00:21:27.762 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4156482 00:21:27.762 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:27.762 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.762 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4156482 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 4156482' 00:21:28.024 killing process with pid 4156482 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4156482 00:21:28.024 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.024 00:21:28.024 Latency(us) 00:21:28.024 [2024-11-19T10:15:36.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.024 [2024-11-19T10:15:36.376Z] =================================================================================================================== 00:21:28.024 [2024-11-19T10:15:36.376Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4156482 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 4153824 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4153824 ']' 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4153824 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153824 00:21:28.024 
11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153824' 00:21:28.024 killing process with pid 4153824 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4153824 00:21:28.024 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4153824 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4156550 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4156550 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4156550 ']' 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:28.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.285 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.285 [2024-11-19 11:15:36.500139] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:28.285 [2024-11-19 11:15:36.500210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.285 [2024-11-19 11:15:36.596849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.285 [2024-11-19 11:15:36.625167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.285 [2024-11-19 11:15:36.625194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.285 [2024-11-19 11:15:36.625200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.285 [2024-11-19 11:15:36.625204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.285 [2024-11-19 11:15:36.625209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.285 [2024-11-19 11:15:36.625657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Kfv4rzTWpm 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Kfv4rzTWpm 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Kfv4rzTWpm 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Kfv4rzTWpm 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:29.227 [2024-11-19 11:15:37.481162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.227 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:29.487 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:29.748 [2024-11-19 11:15:37.862098] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.748 [2024-11-19 11:15:37.862288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.748 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:29.748 malloc0 00:21:29.748 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:30.009 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:30.269 [2024-11-19 11:15:38.373006] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Kfv4rzTWpm': 0100666 00:21:30.269 [2024-11-19 11:15:38.373028] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:30.269 request: 00:21:30.269 { 00:21:30.269 "name": "key0", 00:21:30.269 "path": "/tmp/tmp.Kfv4rzTWpm", 00:21:30.269 "method": "keyring_file_add_key", 00:21:30.269 "req_id": 1 
00:21:30.269 } 00:21:30.269 Got JSON-RPC error response 00:21:30.269 response: 00:21:30.269 { 00:21:30.269 "code": -1, 00:21:30.269 "message": "Operation not permitted" 00:21:30.269 } 00:21:30.269 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:30.269 [2024-11-19 11:15:38.541445] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:30.269 [2024-11-19 11:15:38.541471] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:30.269 request: 00:21:30.269 { 00:21:30.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.270 "host": "nqn.2016-06.io.spdk:host1", 00:21:30.270 "psk": "key0", 00:21:30.270 "method": "nvmf_subsystem_add_host", 00:21:30.270 "req_id": 1 00:21:30.270 } 00:21:30.270 Got JSON-RPC error response 00:21:30.270 response: 00:21:30.270 { 00:21:30.270 "code": -32603, 00:21:30.270 "message": "Internal error" 00:21:30.270 } 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 4156550 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4156550 ']' 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4156550 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:30.270 11:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.270 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4156550 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4156550' 00:21:30.530 killing process with pid 4156550 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4156550 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4156550 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Kfv4rzTWpm 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4157202 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4157202 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4157202 ']' 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.530 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.530 [2024-11-19 11:15:38.809243] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:30.530 [2024-11-19 11:15:38.809295] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.790 [2024-11-19 11:15:38.906132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.790 [2024-11-19 11:15:38.935017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.790 [2024-11-19 11:15:38.935047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.790 [2024-11-19 11:15:38.935053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.790 [2024-11-19 11:15:38.935057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.790 [2024-11-19 11:15:38.935061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
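The `keyring_file_add_key` rejection above (mode `0100666`, "Operation not permitted") and the `chmod 0600` that tls.sh applies before retrying can be reproduced in miniature. The group/other-bit check below is an assumption inferred from the log messages, not SPDK's literal `keyring_file_check_path` code:

```shell
#!/usr/bin/env bash
# Sketch of the PSK key-file permission check suggested by the log:
# assume a key file is rejected if any group/other permission bits are set.
key=$(mktemp)

chmod 0666 "$key"                 # same mode class the log rejects (0100666)
mode=$(stat -c '%a' "$key")
if [ $(( 0$mode & 077 )) -ne 0 ]; then
  echo "rejected: mode $mode has group/other bits set"
fi

chmod 0600 "$key"                 # the fix tls.sh applies before retrying
mode=$(stat -c '%a' "$key")
if [ $(( 0$mode & 077 )) -eq 0 ]; then
  echo "accepted: mode $mode is owner-only"
fi

rm -f "$key"
```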
00:21:30.790 [2024-11-19 11:15:38.935541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Kfv4rzTWpm 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Kfv4rzTWpm 00:21:31.376 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:31.637 [2024-11-19 11:15:39.774965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.637 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:31.637 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:31.898 [2024-11-19 11:15:40.099767] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:31.898 [2024-11-19 11:15:40.099982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:31.898 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:32.159 malloc0 00:21:32.159 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:32.159 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=4157569 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 4157569 /var/tmp/bdevperf.sock 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4157569 ']' 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:21:32.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.422 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.683 [2024-11-19 11:15:40.776334] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:32.683 [2024-11-19 11:15:40.776388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157569 ] 00:21:32.683 [2024-11-19 11:15:40.840949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.683 [2024-11-19 11:15:40.869661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.683 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.683 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.683 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:32.943 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:32.943 [2024-11-19 11:15:41.287012] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.205 TLSTESTn1 00:21:33.205 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:33.467 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:33.467 "subsystems": [ 00:21:33.467 { 00:21:33.467 "subsystem": "keyring", 00:21:33.467 "config": [ 00:21:33.467 { 00:21:33.467 "method": "keyring_file_add_key", 00:21:33.467 "params": { 00:21:33.467 "name": "key0", 00:21:33.467 "path": "/tmp/tmp.Kfv4rzTWpm" 00:21:33.467 } 00:21:33.467 } 00:21:33.467 ] 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "subsystem": "iobuf", 00:21:33.467 "config": [ 00:21:33.467 { 00:21:33.467 "method": "iobuf_set_options", 00:21:33.467 "params": { 00:21:33.467 "small_pool_count": 8192, 00:21:33.467 "large_pool_count": 1024, 00:21:33.467 "small_bufsize": 8192, 00:21:33.467 "large_bufsize": 135168, 00:21:33.467 "enable_numa": false 00:21:33.467 } 00:21:33.467 } 00:21:33.467 ] 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "subsystem": "sock", 00:21:33.467 "config": [ 00:21:33.467 { 00:21:33.467 "method": "sock_set_default_impl", 00:21:33.467 "params": { 00:21:33.467 "impl_name": "posix" 00:21:33.467 } 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "method": "sock_impl_set_options", 00:21:33.467 "params": { 00:21:33.467 "impl_name": "ssl", 00:21:33.467 "recv_buf_size": 4096, 00:21:33.467 "send_buf_size": 4096, 00:21:33.467 "enable_recv_pipe": true, 00:21:33.467 "enable_quickack": false, 00:21:33.467 "enable_placement_id": 0, 00:21:33.467 "enable_zerocopy_send_server": true, 00:21:33.467 "enable_zerocopy_send_client": false, 00:21:33.467 "zerocopy_threshold": 0, 00:21:33.467 "tls_version": 0, 00:21:33.467 "enable_ktls": false 00:21:33.467 } 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "method": "sock_impl_set_options", 00:21:33.467 "params": { 00:21:33.467 "impl_name": "posix", 00:21:33.467 "recv_buf_size": 2097152, 00:21:33.467 "send_buf_size": 2097152, 00:21:33.467 "enable_recv_pipe": true, 00:21:33.467 "enable_quickack": false, 00:21:33.467 "enable_placement_id": 0, 
00:21:33.467 "enable_zerocopy_send_server": true, 00:21:33.467 "enable_zerocopy_send_client": false, 00:21:33.467 "zerocopy_threshold": 0, 00:21:33.467 "tls_version": 0, 00:21:33.467 "enable_ktls": false 00:21:33.467 } 00:21:33.467 } 00:21:33.467 ] 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "subsystem": "vmd", 00:21:33.467 "config": [] 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "subsystem": "accel", 00:21:33.467 "config": [ 00:21:33.467 { 00:21:33.467 "method": "accel_set_options", 00:21:33.467 "params": { 00:21:33.467 "small_cache_size": 128, 00:21:33.467 "large_cache_size": 16, 00:21:33.467 "task_count": 2048, 00:21:33.467 "sequence_count": 2048, 00:21:33.467 "buf_count": 2048 00:21:33.467 } 00:21:33.467 } 00:21:33.467 ] 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "subsystem": "bdev", 00:21:33.467 "config": [ 00:21:33.467 { 00:21:33.467 "method": "bdev_set_options", 00:21:33.467 "params": { 00:21:33.467 "bdev_io_pool_size": 65535, 00:21:33.467 "bdev_io_cache_size": 256, 00:21:33.467 "bdev_auto_examine": true, 00:21:33.467 "iobuf_small_cache_size": 128, 00:21:33.467 "iobuf_large_cache_size": 16 00:21:33.467 } 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "method": "bdev_raid_set_options", 00:21:33.467 "params": { 00:21:33.467 "process_window_size_kb": 1024, 00:21:33.467 "process_max_bandwidth_mb_sec": 0 00:21:33.467 } 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "method": "bdev_iscsi_set_options", 00:21:33.467 "params": { 00:21:33.467 "timeout_sec": 30 00:21:33.467 } 00:21:33.467 }, 00:21:33.467 { 00:21:33.467 "method": "bdev_nvme_set_options", 00:21:33.467 "params": { 00:21:33.467 "action_on_timeout": "none", 00:21:33.467 "timeout_us": 0, 00:21:33.467 "timeout_admin_us": 0, 00:21:33.467 "keep_alive_timeout_ms": 10000, 00:21:33.467 "arbitration_burst": 0, 00:21:33.467 "low_priority_weight": 0, 00:21:33.467 "medium_priority_weight": 0, 00:21:33.467 "high_priority_weight": 0, 00:21:33.467 "nvme_adminq_poll_period_us": 10000, 00:21:33.467 "nvme_ioq_poll_period_us": 0, 
00:21:33.467 "io_queue_requests": 0, 00:21:33.467 "delay_cmd_submit": true, 00:21:33.467 "transport_retry_count": 4, 00:21:33.467 "bdev_retry_count": 3, 00:21:33.467 "transport_ack_timeout": 0, 00:21:33.467 "ctrlr_loss_timeout_sec": 0, 00:21:33.467 "reconnect_delay_sec": 0, 00:21:33.467 "fast_io_fail_timeout_sec": 0, 00:21:33.467 "disable_auto_failback": false, 00:21:33.467 "generate_uuids": false, 00:21:33.467 "transport_tos": 0, 00:21:33.467 "nvme_error_stat": false, 00:21:33.467 "rdma_srq_size": 0, 00:21:33.467 "io_path_stat": false, 00:21:33.468 "allow_accel_sequence": false, 00:21:33.468 "rdma_max_cq_size": 0, 00:21:33.468 "rdma_cm_event_timeout_ms": 0, 00:21:33.468 "dhchap_digests": [ 00:21:33.468 "sha256", 00:21:33.468 "sha384", 00:21:33.468 "sha512" 00:21:33.468 ], 00:21:33.468 "dhchap_dhgroups": [ 00:21:33.468 "null", 00:21:33.468 "ffdhe2048", 00:21:33.468 "ffdhe3072", 00:21:33.468 "ffdhe4096", 00:21:33.468 "ffdhe6144", 00:21:33.468 "ffdhe8192" 00:21:33.468 ] 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "bdev_nvme_set_hotplug", 00:21:33.468 "params": { 00:21:33.468 "period_us": 100000, 00:21:33.468 "enable": false 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "bdev_malloc_create", 00:21:33.468 "params": { 00:21:33.468 "name": "malloc0", 00:21:33.468 "num_blocks": 8192, 00:21:33.468 "block_size": 4096, 00:21:33.468 "physical_block_size": 4096, 00:21:33.468 "uuid": "66eaf499-5175-4740-9a06-3aa5ca064fe7", 00:21:33.468 "optimal_io_boundary": 0, 00:21:33.468 "md_size": 0, 00:21:33.468 "dif_type": 0, 00:21:33.468 "dif_is_head_of_md": false, 00:21:33.468 "dif_pi_format": 0 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "bdev_wait_for_examine" 00:21:33.468 } 00:21:33.468 ] 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "subsystem": "nbd", 00:21:33.468 "config": [] 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "subsystem": "scheduler", 00:21:33.468 "config": [ 00:21:33.468 { 00:21:33.468 "method": 
"framework_set_scheduler", 00:21:33.468 "params": { 00:21:33.468 "name": "static" 00:21:33.468 } 00:21:33.468 } 00:21:33.468 ] 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "subsystem": "nvmf", 00:21:33.468 "config": [ 00:21:33.468 { 00:21:33.468 "method": "nvmf_set_config", 00:21:33.468 "params": { 00:21:33.468 "discovery_filter": "match_any", 00:21:33.468 "admin_cmd_passthru": { 00:21:33.468 "identify_ctrlr": false 00:21:33.468 }, 00:21:33.468 "dhchap_digests": [ 00:21:33.468 "sha256", 00:21:33.468 "sha384", 00:21:33.468 "sha512" 00:21:33.468 ], 00:21:33.468 "dhchap_dhgroups": [ 00:21:33.468 "null", 00:21:33.468 "ffdhe2048", 00:21:33.468 "ffdhe3072", 00:21:33.468 "ffdhe4096", 00:21:33.468 "ffdhe6144", 00:21:33.468 "ffdhe8192" 00:21:33.468 ] 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "nvmf_set_max_subsystems", 00:21:33.468 "params": { 00:21:33.468 "max_subsystems": 1024 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "nvmf_set_crdt", 00:21:33.468 "params": { 00:21:33.468 "crdt1": 0, 00:21:33.468 "crdt2": 0, 00:21:33.468 "crdt3": 0 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "nvmf_create_transport", 00:21:33.468 "params": { 00:21:33.468 "trtype": "TCP", 00:21:33.468 "max_queue_depth": 128, 00:21:33.468 "max_io_qpairs_per_ctrlr": 127, 00:21:33.468 "in_capsule_data_size": 4096, 00:21:33.468 "max_io_size": 131072, 00:21:33.468 "io_unit_size": 131072, 00:21:33.468 "max_aq_depth": 128, 00:21:33.468 "num_shared_buffers": 511, 00:21:33.468 "buf_cache_size": 4294967295, 00:21:33.468 "dif_insert_or_strip": false, 00:21:33.468 "zcopy": false, 00:21:33.468 "c2h_success": false, 00:21:33.468 "sock_priority": 0, 00:21:33.468 "abort_timeout_sec": 1, 00:21:33.468 "ack_timeout": 0, 00:21:33.468 "data_wr_pool_size": 0 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "nvmf_create_subsystem", 00:21:33.468 "params": { 00:21:33.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.468 
"allow_any_host": false, 00:21:33.468 "serial_number": "SPDK00000000000001", 00:21:33.468 "model_number": "SPDK bdev Controller", 00:21:33.468 "max_namespaces": 10, 00:21:33.468 "min_cntlid": 1, 00:21:33.468 "max_cntlid": 65519, 00:21:33.468 "ana_reporting": false 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "nvmf_subsystem_add_host", 00:21:33.468 "params": { 00:21:33.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.468 "host": "nqn.2016-06.io.spdk:host1", 00:21:33.468 "psk": "key0" 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "nvmf_subsystem_add_ns", 00:21:33.468 "params": { 00:21:33.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.468 "namespace": { 00:21:33.468 "nsid": 1, 00:21:33.468 "bdev_name": "malloc0", 00:21:33.468 "nguid": "66EAF499517547409A063AA5CA064FE7", 00:21:33.468 "uuid": "66eaf499-5175-4740-9a06-3aa5ca064fe7", 00:21:33.468 "no_auto_visible": false 00:21:33.468 } 00:21:33.468 } 00:21:33.468 }, 00:21:33.468 { 00:21:33.468 "method": "nvmf_subsystem_add_listener", 00:21:33.468 "params": { 00:21:33.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.468 "listen_address": { 00:21:33.468 "trtype": "TCP", 00:21:33.468 "adrfam": "IPv4", 00:21:33.468 "traddr": "10.0.0.2", 00:21:33.468 "trsvcid": "4420" 00:21:33.468 }, 00:21:33.468 "secure_channel": true 00:21:33.468 } 00:21:33.468 } 00:21:33.468 ] 00:21:33.468 } 00:21:33.468 ] 00:21:33.468 }' 00:21:33.468 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:33.730 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:33.730 "subsystems": [ 00:21:33.730 { 00:21:33.730 "subsystem": "keyring", 00:21:33.730 "config": [ 00:21:33.730 { 00:21:33.730 "method": "keyring_file_add_key", 00:21:33.730 "params": { 00:21:33.730 "name": "key0", 00:21:33.730 "path": "/tmp/tmp.Kfv4rzTWpm" 00:21:33.730 } 
00:21:33.730 } 00:21:33.730 ] 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "subsystem": "iobuf", 00:21:33.730 "config": [ 00:21:33.730 { 00:21:33.730 "method": "iobuf_set_options", 00:21:33.730 "params": { 00:21:33.730 "small_pool_count": 8192, 00:21:33.730 "large_pool_count": 1024, 00:21:33.730 "small_bufsize": 8192, 00:21:33.730 "large_bufsize": 135168, 00:21:33.730 "enable_numa": false 00:21:33.730 } 00:21:33.730 } 00:21:33.730 ] 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "subsystem": "sock", 00:21:33.730 "config": [ 00:21:33.730 { 00:21:33.730 "method": "sock_set_default_impl", 00:21:33.730 "params": { 00:21:33.730 "impl_name": "posix" 00:21:33.730 } 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "method": "sock_impl_set_options", 00:21:33.730 "params": { 00:21:33.730 "impl_name": "ssl", 00:21:33.730 "recv_buf_size": 4096, 00:21:33.730 "send_buf_size": 4096, 00:21:33.730 "enable_recv_pipe": true, 00:21:33.730 "enable_quickack": false, 00:21:33.730 "enable_placement_id": 0, 00:21:33.730 "enable_zerocopy_send_server": true, 00:21:33.730 "enable_zerocopy_send_client": false, 00:21:33.730 "zerocopy_threshold": 0, 00:21:33.730 "tls_version": 0, 00:21:33.730 "enable_ktls": false 00:21:33.730 } 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "method": "sock_impl_set_options", 00:21:33.730 "params": { 00:21:33.730 "impl_name": "posix", 00:21:33.730 "recv_buf_size": 2097152, 00:21:33.730 "send_buf_size": 2097152, 00:21:33.730 "enable_recv_pipe": true, 00:21:33.730 "enable_quickack": false, 00:21:33.730 "enable_placement_id": 0, 00:21:33.730 "enable_zerocopy_send_server": true, 00:21:33.730 "enable_zerocopy_send_client": false, 00:21:33.730 "zerocopy_threshold": 0, 00:21:33.730 "tls_version": 0, 00:21:33.730 "enable_ktls": false 00:21:33.730 } 00:21:33.730 } 00:21:33.730 ] 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "subsystem": "vmd", 00:21:33.730 "config": [] 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "subsystem": "accel", 00:21:33.730 "config": [ 00:21:33.730 { 00:21:33.730 
"method": "accel_set_options", 00:21:33.730 "params": { 00:21:33.730 "small_cache_size": 128, 00:21:33.730 "large_cache_size": 16, 00:21:33.730 "task_count": 2048, 00:21:33.730 "sequence_count": 2048, 00:21:33.730 "buf_count": 2048 00:21:33.730 } 00:21:33.730 } 00:21:33.730 ] 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "subsystem": "bdev", 00:21:33.730 "config": [ 00:21:33.730 { 00:21:33.730 "method": "bdev_set_options", 00:21:33.730 "params": { 00:21:33.730 "bdev_io_pool_size": 65535, 00:21:33.730 "bdev_io_cache_size": 256, 00:21:33.730 "bdev_auto_examine": true, 00:21:33.730 "iobuf_small_cache_size": 128, 00:21:33.730 "iobuf_large_cache_size": 16 00:21:33.730 } 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "method": "bdev_raid_set_options", 00:21:33.730 "params": { 00:21:33.730 "process_window_size_kb": 1024, 00:21:33.730 "process_max_bandwidth_mb_sec": 0 00:21:33.730 } 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "method": "bdev_iscsi_set_options", 00:21:33.730 "params": { 00:21:33.730 "timeout_sec": 30 00:21:33.730 } 00:21:33.730 }, 00:21:33.730 { 00:21:33.730 "method": "bdev_nvme_set_options", 00:21:33.730 "params": { 00:21:33.730 "action_on_timeout": "none", 00:21:33.730 "timeout_us": 0, 00:21:33.730 "timeout_admin_us": 0, 00:21:33.730 "keep_alive_timeout_ms": 10000, 00:21:33.730 "arbitration_burst": 0, 00:21:33.730 "low_priority_weight": 0, 00:21:33.730 "medium_priority_weight": 0, 00:21:33.730 "high_priority_weight": 0, 00:21:33.730 "nvme_adminq_poll_period_us": 10000, 00:21:33.730 "nvme_ioq_poll_period_us": 0, 00:21:33.730 "io_queue_requests": 512, 00:21:33.730 "delay_cmd_submit": true, 00:21:33.730 "transport_retry_count": 4, 00:21:33.730 "bdev_retry_count": 3, 00:21:33.730 "transport_ack_timeout": 0, 00:21:33.730 "ctrlr_loss_timeout_sec": 0, 00:21:33.730 "reconnect_delay_sec": 0, 00:21:33.730 "fast_io_fail_timeout_sec": 0, 00:21:33.730 "disable_auto_failback": false, 00:21:33.730 "generate_uuids": false, 00:21:33.730 "transport_tos": 0, 00:21:33.730 
"nvme_error_stat": false, 00:21:33.730 "rdma_srq_size": 0, 00:21:33.730 "io_path_stat": false, 00:21:33.730 "allow_accel_sequence": false, 00:21:33.730 "rdma_max_cq_size": 0, 00:21:33.730 "rdma_cm_event_timeout_ms": 0, 00:21:33.730 "dhchap_digests": [ 00:21:33.730 "sha256", 00:21:33.730 "sha384", 00:21:33.730 "sha512" 00:21:33.730 ], 00:21:33.730 "dhchap_dhgroups": [ 00:21:33.730 "null", 00:21:33.730 "ffdhe2048", 00:21:33.730 "ffdhe3072", 00:21:33.730 "ffdhe4096", 00:21:33.730 "ffdhe6144", 00:21:33.730 "ffdhe8192" 00:21:33.730 ] 00:21:33.731 } 00:21:33.731 }, 00:21:33.731 { 00:21:33.731 "method": "bdev_nvme_attach_controller", 00:21:33.731 "params": { 00:21:33.731 "name": "TLSTEST", 00:21:33.731 "trtype": "TCP", 00:21:33.731 "adrfam": "IPv4", 00:21:33.731 "traddr": "10.0.0.2", 00:21:33.731 "trsvcid": "4420", 00:21:33.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.731 "prchk_reftag": false, 00:21:33.731 "prchk_guard": false, 00:21:33.731 "ctrlr_loss_timeout_sec": 0, 00:21:33.731 "reconnect_delay_sec": 0, 00:21:33.731 "fast_io_fail_timeout_sec": 0, 00:21:33.731 "psk": "key0", 00:21:33.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:33.731 "hdgst": false, 00:21:33.731 "ddgst": false, 00:21:33.731 "multipath": "multipath" 00:21:33.731 } 00:21:33.731 }, 00:21:33.731 { 00:21:33.731 "method": "bdev_nvme_set_hotplug", 00:21:33.731 "params": { 00:21:33.731 "period_us": 100000, 00:21:33.731 "enable": false 00:21:33.731 } 00:21:33.731 }, 00:21:33.731 { 00:21:33.731 "method": "bdev_wait_for_examine" 00:21:33.731 } 00:21:33.731 ] 00:21:33.731 }, 00:21:33.731 { 00:21:33.731 "subsystem": "nbd", 00:21:33.731 "config": [] 00:21:33.731 } 00:21:33.731 ] 00:21:33.731 }' 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 4157569 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4157569 ']' 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 4157569 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4157569 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4157569' 00:21:33.731 killing process with pid 4157569 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4157569 00:21:33.731 Received shutdown signal, test time was about 10.000000 seconds 00:21:33.731 00:21:33.731 Latency(us) 00:21:33.731 [2024-11-19T10:15:42.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.731 [2024-11-19T10:15:42.083Z] =================================================================================================================== 00:21:33.731 [2024-11-19T10:15:42.083Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:33.731 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4157569 00:21:33.731 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 4157202 00:21:33.731 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4157202 ']' 00:21:33.731 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4157202 00:21:33.731 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.731 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.731 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4157202 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4157202' 00:21:33.993 killing process with pid 4157202 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4157202 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4157202 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.993 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:33.993 "subsystems": [ 00:21:33.993 { 00:21:33.993 "subsystem": "keyring", 00:21:33.993 "config": [ 00:21:33.993 { 00:21:33.993 "method": "keyring_file_add_key", 00:21:33.993 "params": { 00:21:33.993 "name": "key0", 00:21:33.993 "path": "/tmp/tmp.Kfv4rzTWpm" 00:21:33.993 } 00:21:33.993 } 00:21:33.993 ] 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "subsystem": "iobuf", 00:21:33.993 "config": [ 00:21:33.993 { 00:21:33.993 "method": "iobuf_set_options", 00:21:33.993 "params": { 00:21:33.993 "small_pool_count": 8192, 00:21:33.993 "large_pool_count": 1024, 00:21:33.993 "small_bufsize": 8192, 00:21:33.993 "large_bufsize": 135168, 
00:21:33.993 "enable_numa": false 00:21:33.993 } 00:21:33.993 } 00:21:33.993 ] 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "subsystem": "sock", 00:21:33.993 "config": [ 00:21:33.993 { 00:21:33.993 "method": "sock_set_default_impl", 00:21:33.993 "params": { 00:21:33.993 "impl_name": "posix" 00:21:33.993 } 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "method": "sock_impl_set_options", 00:21:33.993 "params": { 00:21:33.993 "impl_name": "ssl", 00:21:33.993 "recv_buf_size": 4096, 00:21:33.993 "send_buf_size": 4096, 00:21:33.993 "enable_recv_pipe": true, 00:21:33.993 "enable_quickack": false, 00:21:33.993 "enable_placement_id": 0, 00:21:33.993 "enable_zerocopy_send_server": true, 00:21:33.993 "enable_zerocopy_send_client": false, 00:21:33.993 "zerocopy_threshold": 0, 00:21:33.993 "tls_version": 0, 00:21:33.993 "enable_ktls": false 00:21:33.993 } 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "method": "sock_impl_set_options", 00:21:33.993 "params": { 00:21:33.993 "impl_name": "posix", 00:21:33.993 "recv_buf_size": 2097152, 00:21:33.993 "send_buf_size": 2097152, 00:21:33.993 "enable_recv_pipe": true, 00:21:33.993 "enable_quickack": false, 00:21:33.993 "enable_placement_id": 0, 00:21:33.993 "enable_zerocopy_send_server": true, 00:21:33.993 "enable_zerocopy_send_client": false, 00:21:33.993 "zerocopy_threshold": 0, 00:21:33.993 "tls_version": 0, 00:21:33.993 "enable_ktls": false 00:21:33.993 } 00:21:33.993 } 00:21:33.993 ] 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "subsystem": "vmd", 00:21:33.993 "config": [] 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "subsystem": "accel", 00:21:33.993 "config": [ 00:21:33.993 { 00:21:33.993 "method": "accel_set_options", 00:21:33.993 "params": { 00:21:33.993 "small_cache_size": 128, 00:21:33.993 "large_cache_size": 16, 00:21:33.993 "task_count": 2048, 00:21:33.993 "sequence_count": 2048, 00:21:33.993 "buf_count": 2048 00:21:33.993 } 00:21:33.993 } 00:21:33.993 ] 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "subsystem": "bdev", 00:21:33.993 
"config": [ 00:21:33.993 { 00:21:33.993 "method": "bdev_set_options", 00:21:33.993 "params": { 00:21:33.993 "bdev_io_pool_size": 65535, 00:21:33.993 "bdev_io_cache_size": 256, 00:21:33.993 "bdev_auto_examine": true, 00:21:33.993 "iobuf_small_cache_size": 128, 00:21:33.993 "iobuf_large_cache_size": 16 00:21:33.993 } 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "method": "bdev_raid_set_options", 00:21:33.993 "params": { 00:21:33.993 "process_window_size_kb": 1024, 00:21:33.993 "process_max_bandwidth_mb_sec": 0 00:21:33.993 } 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "method": "bdev_iscsi_set_options", 00:21:33.993 "params": { 00:21:33.993 "timeout_sec": 30 00:21:33.993 } 00:21:33.993 }, 00:21:33.993 { 00:21:33.993 "method": "bdev_nvme_set_options", 00:21:33.993 "params": { 00:21:33.993 "action_on_timeout": "none", 00:21:33.993 "timeout_us": 0, 00:21:33.993 "timeout_admin_us": 0, 00:21:33.993 "keep_alive_timeout_ms": 10000, 00:21:33.993 "arbitration_burst": 0, 00:21:33.993 "low_priority_weight": 0, 00:21:33.993 "medium_priority_weight": 0, 00:21:33.993 "high_priority_weight": 0, 00:21:33.993 "nvme_adminq_poll_period_us": 10000, 00:21:33.993 "nvme_ioq_poll_period_us": 0, 00:21:33.993 "io_queue_requests": 0, 00:21:33.993 "delay_cmd_submit": true, 00:21:33.993 "transport_retry_count": 4, 00:21:33.993 "bdev_retry_count": 3, 00:21:33.993 "transport_ack_timeout": 0, 00:21:33.993 "ctrlr_loss_timeout_sec": 0, 00:21:33.993 "reconnect_delay_sec": 0, 00:21:33.993 "fast_io_fail_timeout_sec": 0, 00:21:33.993 "disable_auto_failback": false, 00:21:33.993 "generate_uuids": false, 00:21:33.993 "transport_tos": 0, 00:21:33.993 "nvme_error_stat": false, 00:21:33.993 "rdma_srq_size": 0, 00:21:33.993 "io_path_stat": false, 00:21:33.993 "allow_accel_sequence": false, 00:21:33.993 "rdma_max_cq_size": 0, 00:21:33.993 "rdma_cm_event_timeout_ms": 0, 00:21:33.993 "dhchap_digests": [ 00:21:33.993 "sha256", 00:21:33.993 "sha384", 00:21:33.993 "sha512" 00:21:33.993 ], 00:21:33.993 
"dhchap_dhgroups": [ 00:21:33.993 "null", 00:21:33.993 "ffdhe2048", 00:21:33.993 "ffdhe3072", 00:21:33.993 "ffdhe4096", 00:21:33.993 "ffdhe6144", 00:21:33.993 "ffdhe8192" 00:21:33.993 ] 00:21:33.993 } 00:21:33.993 }, 00:21:33.993 { 00:21:33.994 "method": "bdev_nvme_set_hotplug", 00:21:33.994 "params": { 00:21:33.994 "period_us": 100000, 00:21:33.994 "enable": false 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "method": "bdev_malloc_create", 00:21:33.994 "params": { 00:21:33.994 "name": "malloc0", 00:21:33.994 "num_blocks": 8192, 00:21:33.994 "block_size": 4096, 00:21:33.994 "physical_block_size": 4096, 00:21:33.994 "uuid": "66eaf499-5175-4740-9a06-3aa5ca064fe7", 00:21:33.994 "optimal_io_boundary": 0, 00:21:33.994 "md_size": 0, 00:21:33.994 "dif_type": 0, 00:21:33.994 "dif_is_head_of_md": false, 00:21:33.994 "dif_pi_format": 0 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "method": "bdev_wait_for_examine" 00:21:33.994 } 00:21:33.994 ] 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "subsystem": "nbd", 00:21:33.994 "config": [] 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "subsystem": "scheduler", 00:21:33.994 "config": [ 00:21:33.994 { 00:21:33.994 "method": "framework_set_scheduler", 00:21:33.994 "params": { 00:21:33.994 "name": "static" 00:21:33.994 } 00:21:33.994 } 00:21:33.994 ] 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "subsystem": "nvmf", 00:21:33.994 "config": [ 00:21:33.994 { 00:21:33.994 "method": "nvmf_set_config", 00:21:33.994 "params": { 00:21:33.994 "discovery_filter": "match_any", 00:21:33.994 "admin_cmd_passthru": { 00:21:33.994 "identify_ctrlr": false 00:21:33.994 }, 00:21:33.994 "dhchap_digests": [ 00:21:33.994 "sha256", 00:21:33.994 "sha384", 00:21:33.994 "sha512" 00:21:33.994 ], 00:21:33.994 "dhchap_dhgroups": [ 00:21:33.994 "null", 00:21:33.994 "ffdhe2048", 00:21:33.994 "ffdhe3072", 00:21:33.994 "ffdhe4096", 00:21:33.994 "ffdhe6144", 00:21:33.994 "ffdhe8192" 00:21:33.994 ] 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 
00:21:33.994 "method": "nvmf_set_max_subsystems", 00:21:33.994 "params": { 00:21:33.994 "max_subsystems": 1024 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "method": "nvmf_set_crdt", 00:21:33.994 "params": { 00:21:33.994 "crdt1": 0, 00:21:33.994 "crdt2": 0, 00:21:33.994 "crdt3": 0 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "method": "nvmf_create_transport", 00:21:33.994 "params": { 00:21:33.994 "trtype": "TCP", 00:21:33.994 "max_queue_depth": 128, 00:21:33.994 "max_io_qpairs_per_ctrlr": 127, 00:21:33.994 "in_capsule_data_size": 4096, 00:21:33.994 "max_io_size": 131072, 00:21:33.994 "io_unit_size": 131072, 00:21:33.994 "max_aq_depth": 128, 00:21:33.994 "num_shared_buffers": 511, 00:21:33.994 "buf_cache_size": 4294967295, 00:21:33.994 "dif_insert_or_strip": false, 00:21:33.994 "zcopy": false, 00:21:33.994 "c2h_success": false, 00:21:33.994 "sock_priority": 0, 00:21:33.994 "abort_timeout_sec": 1, 00:21:33.994 "ack_timeout": 0, 00:21:33.994 "data_wr_pool_size": 0 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "method": "nvmf_create_subsystem", 00:21:33.994 "params": { 00:21:33.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.994 "allow_any_host": false, 00:21:33.994 "serial_number": "SPDK00000000000001", 00:21:33.994 "model_number": "SPDK bdev Controller", 00:21:33.994 "max_namespaces": 10, 00:21:33.994 "min_cntlid": 1, 00:21:33.994 "max_cntlid": 65519, 00:21:33.994 "ana_reporting": false 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "method": "nvmf_subsystem_add_host", 00:21:33.994 "params": { 00:21:33.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.994 "host": "nqn.2016-06.io.spdk:host1", 00:21:33.994 "psk": "key0" 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "method": "nvmf_subsystem_add_ns", 00:21:33.994 "params": { 00:21:33.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.994 "namespace": { 00:21:33.994 "nsid": 1, 00:21:33.994 "bdev_name": "malloc0", 00:21:33.994 "nguid": 
"66EAF499517547409A063AA5CA064FE7", 00:21:33.994 "uuid": "66eaf499-5175-4740-9a06-3aa5ca064fe7", 00:21:33.994 "no_auto_visible": false 00:21:33.994 } 00:21:33.994 } 00:21:33.994 }, 00:21:33.994 { 00:21:33.994 "method": "nvmf_subsystem_add_listener", 00:21:33.994 "params": { 00:21:33.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.994 "listen_address": { 00:21:33.994 "trtype": "TCP", 00:21:33.994 "adrfam": "IPv4", 00:21:33.994 "traddr": "10.0.0.2", 00:21:33.994 "trsvcid": "4420" 00:21:33.994 }, 00:21:33.994 "secure_channel": true 00:21:33.994 } 00:21:33.994 } 00:21:33.994 ] 00:21:33.994 } 00:21:33.994 ] 00:21:33.994 }' 00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4157915 00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4157915 00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4157915 ']' 00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.994 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 [2024-11-19 11:15:42.306747] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:33.994 [2024-11-19 11:15:42.306807] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.255 [2024-11-19 11:15:42.406275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.255 [2024-11-19 11:15:42.436797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.255 [2024-11-19 11:15:42.436827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.255 [2024-11-19 11:15:42.436833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.255 [2024-11-19 11:15:42.436837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.255 [2024-11-19 11:15:42.436841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:34.255 [2024-11-19 11:15:42.437345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.516 [2024-11-19 11:15:42.629987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.516 [2024-11-19 11:15:42.662016] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.516 [2024-11-19 11:15:42.662202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=4157951 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 4157951 /var/tmp/bdevperf.sock 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4157951 ']' 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:34.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.777 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:34.777 "subsystems": [ 00:21:34.777 { 00:21:34.777 "subsystem": "keyring", 00:21:34.777 "config": [ 00:21:34.777 { 00:21:34.777 "method": "keyring_file_add_key", 00:21:34.777 "params": { 00:21:34.777 "name": "key0", 00:21:34.777 "path": "/tmp/tmp.Kfv4rzTWpm" 00:21:34.777 } 00:21:34.777 } 00:21:34.777 ] 00:21:34.777 }, 00:21:34.777 { 00:21:34.777 "subsystem": "iobuf", 00:21:34.777 "config": [ 00:21:34.777 { 00:21:34.777 "method": "iobuf_set_options", 00:21:34.777 "params": { 00:21:34.777 "small_pool_count": 8192, 00:21:34.778 "large_pool_count": 1024, 00:21:34.778 "small_bufsize": 8192, 00:21:34.778 "large_bufsize": 135168, 00:21:34.778 "enable_numa": false 00:21:34.778 } 00:21:34.778 } 00:21:34.778 ] 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "subsystem": "sock", 00:21:34.778 "config": [ 00:21:34.778 { 00:21:34.778 "method": "sock_set_default_impl", 00:21:34.778 "params": { 00:21:34.778 "impl_name": "posix" 00:21:34.778 } 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "method": "sock_impl_set_options", 00:21:34.778 "params": { 00:21:34.778 "impl_name": "ssl", 00:21:34.778 "recv_buf_size": 4096, 00:21:34.778 "send_buf_size": 4096, 00:21:34.778 "enable_recv_pipe": true, 00:21:34.778 "enable_quickack": false, 00:21:34.778 "enable_placement_id": 0, 00:21:34.778 "enable_zerocopy_send_server": true, 00:21:34.778 
"enable_zerocopy_send_client": false, 00:21:34.778 "zerocopy_threshold": 0, 00:21:34.778 "tls_version": 0, 00:21:34.778 "enable_ktls": false 00:21:34.778 } 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "method": "sock_impl_set_options", 00:21:34.778 "params": { 00:21:34.778 "impl_name": "posix", 00:21:34.778 "recv_buf_size": 2097152, 00:21:34.778 "send_buf_size": 2097152, 00:21:34.778 "enable_recv_pipe": true, 00:21:34.778 "enable_quickack": false, 00:21:34.778 "enable_placement_id": 0, 00:21:34.778 "enable_zerocopy_send_server": true, 00:21:34.778 "enable_zerocopy_send_client": false, 00:21:34.778 "zerocopy_threshold": 0, 00:21:34.778 "tls_version": 0, 00:21:34.778 "enable_ktls": false 00:21:34.778 } 00:21:34.778 } 00:21:34.778 ] 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "subsystem": "vmd", 00:21:34.778 "config": [] 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "subsystem": "accel", 00:21:34.778 "config": [ 00:21:34.778 { 00:21:34.778 "method": "accel_set_options", 00:21:34.778 "params": { 00:21:34.778 "small_cache_size": 128, 00:21:34.778 "large_cache_size": 16, 00:21:34.778 "task_count": 2048, 00:21:34.778 "sequence_count": 2048, 00:21:34.778 "buf_count": 2048 00:21:34.778 } 00:21:34.778 } 00:21:34.778 ] 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "subsystem": "bdev", 00:21:34.778 "config": [ 00:21:34.778 { 00:21:34.778 "method": "bdev_set_options", 00:21:34.778 "params": { 00:21:34.778 "bdev_io_pool_size": 65535, 00:21:34.778 "bdev_io_cache_size": 256, 00:21:34.778 "bdev_auto_examine": true, 00:21:34.778 "iobuf_small_cache_size": 128, 00:21:34.778 "iobuf_large_cache_size": 16 00:21:34.778 } 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "method": "bdev_raid_set_options", 00:21:34.778 "params": { 00:21:34.778 "process_window_size_kb": 1024, 00:21:34.778 "process_max_bandwidth_mb_sec": 0 00:21:34.778 } 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "method": "bdev_iscsi_set_options", 00:21:34.778 "params": { 00:21:34.778 "timeout_sec": 30 00:21:34.778 } 00:21:34.778 }, 
00:21:34.778 { 00:21:34.778 "method": "bdev_nvme_set_options", 00:21:34.778 "params": { 00:21:34.778 "action_on_timeout": "none", 00:21:34.778 "timeout_us": 0, 00:21:34.778 "timeout_admin_us": 0, 00:21:34.778 "keep_alive_timeout_ms": 10000, 00:21:34.778 "arbitration_burst": 0, 00:21:34.778 "low_priority_weight": 0, 00:21:34.778 "medium_priority_weight": 0, 00:21:34.778 "high_priority_weight": 0, 00:21:34.778 "nvme_adminq_poll_period_us": 10000, 00:21:34.778 "nvme_ioq_poll_period_us": 0, 00:21:34.778 "io_queue_requests": 512, 00:21:34.778 "delay_cmd_submit": true, 00:21:34.778 "transport_retry_count": 4, 00:21:34.778 "bdev_retry_count": 3, 00:21:34.778 "transport_ack_timeout": 0, 00:21:34.778 "ctrlr_loss_timeout_sec": 0, 00:21:34.778 "reconnect_delay_sec": 0, 00:21:34.778 "fast_io_fail_timeout_sec": 0, 00:21:34.778 "disable_auto_failback": false, 00:21:34.778 "generate_uuids": false, 00:21:34.778 "transport_tos": 0, 00:21:34.778 "nvme_error_stat": false, 00:21:34.778 "rdma_srq_size": 0, 00:21:34.778 "io_path_stat": false, 00:21:34.778 "allow_accel_sequence": false, 00:21:34.778 "rdma_max_cq_size": 0, 00:21:34.778 "rdma_cm_event_timeout_ms": 0, 00:21:34.778 "dhchap_digests": [ 00:21:34.778 "sha256", 00:21:34.778 "sha384", 00:21:34.778 "sha512" 00:21:34.778 ], 00:21:34.778 "dhchap_dhgroups": [ 00:21:34.778 "null", 00:21:34.778 "ffdhe2048", 00:21:34.778 "ffdhe3072", 00:21:34.778 "ffdhe4096", 00:21:34.778 "ffdhe6144", 00:21:34.778 "ffdhe8192" 00:21:34.778 ] 00:21:34.778 } 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "method": "bdev_nvme_attach_controller", 00:21:34.778 "params": { 00:21:34.778 "name": "TLSTEST", 00:21:34.778 "trtype": "TCP", 00:21:34.778 "adrfam": "IPv4", 00:21:34.778 "traddr": "10.0.0.2", 00:21:34.778 "trsvcid": "4420", 00:21:34.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.778 "prchk_reftag": false, 00:21:34.778 "prchk_guard": false, 00:21:34.778 "ctrlr_loss_timeout_sec": 0, 00:21:34.778 "reconnect_delay_sec": 0, 00:21:34.778 
"fast_io_fail_timeout_sec": 0, 00:21:34.778 "psk": "key0", 00:21:34.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.778 "hdgst": false, 00:21:34.778 "ddgst": false, 00:21:34.778 "multipath": "multipath" 00:21:34.778 } 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "method": "bdev_nvme_set_hotplug", 00:21:34.778 "params": { 00:21:34.778 "period_us": 100000, 00:21:34.778 "enable": false 00:21:34.778 } 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "method": "bdev_wait_for_examine" 00:21:34.778 } 00:21:34.778 ] 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "subsystem": "nbd", 00:21:34.778 "config": [] 00:21:34.778 } 00:21:34.778 ] 00:21:34.778 }' 00:21:35.040 [2024-11-19 11:15:43.183522] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:35.040 [2024-11-19 11:15:43.183575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157951 ] 00:21:35.040 [2024-11-19 11:15:43.248012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.040 [2024-11-19 11:15:43.277131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.300 [2024-11-19 11:15:43.411332] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.870 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.870 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.870 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:35.870 Running I/O for 10 seconds... 
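The bdevperf process above receives its entire configuration as a JSON document piped through `/dev/fd/63` (the `echo '{ "subsystems": [...] }'` visible in the trace). As a hedged illustration only — the subsystem and method names mirror the dump in this log, but the helper function below is hypothetical and not part of SPDK — a minimal sketch assembling and serializing such a config in Python:

```python
import json

def make_bdevperf_config(psk_path, traddr, subnqn, hostnqn):
    # Mirrors the structure echoed in the log: a "keyring" subsystem that
    # registers the TLS PSK file as "key0", and a "bdev" subsystem whose
    # bdev_nvme_attach_controller call references that key by name.
    return {
        "subsystems": [
            {
                "subsystem": "keyring",
                "config": [
                    {"method": "keyring_file_add_key",
                     "params": {"name": "key0", "path": psk_path}},
                ],
            },
            {
                "subsystem": "bdev",
                "config": [
                    {"method": "bdev_nvme_attach_controller",
                     "params": {"name": "TLSTEST", "trtype": "TCP",
                                "adrfam": "IPv4", "traddr": traddr,
                                "trsvcid": "4420", "subnqn": subnqn,
                                "hostnqn": hostnqn, "psk": "key0"}},
                    {"method": "bdev_wait_for_examine"},
                ],
            },
        ],
    }

cfg = make_bdevperf_config("/tmp/tmp.Kfv4rzTWpm", "10.0.0.2",
                           "nqn.2016-06.io.spdk:cnode1",
                           "nqn.2016-06.io.spdk:host1")
# This serialized document is what would be fed to bdevperf via a pipe.
text = json.dumps(cfg, indent=2)
```

The key point the log demonstrates is the ordering: the keyring subsystem must register `key0` before the bdev subsystem's attach call can reference it as the TLS PSK.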
00:21:37.753 4417.00 IOPS, 17.25 MiB/s [2024-11-19T10:15:47.489Z] 4854.00 IOPS, 18.96 MiB/s [2024-11-19T10:15:48.060Z] 5376.33 IOPS, 21.00 MiB/s [2024-11-19T10:15:49.444Z] 5594.50 IOPS, 21.85 MiB/s [2024-11-19T10:15:50.387Z] 5594.20 IOPS, 21.85 MiB/s [2024-11-19T10:15:51.330Z] 5494.00 IOPS, 21.46 MiB/s [2024-11-19T10:15:52.273Z] 5533.29 IOPS, 21.61 MiB/s [2024-11-19T10:15:53.213Z] 5428.12 IOPS, 21.20 MiB/s [2024-11-19T10:15:54.157Z] 5457.67 IOPS, 21.32 MiB/s [2024-11-19T10:15:54.157Z] 5399.80 IOPS, 21.09 MiB/s 00:21:45.805 Latency(us) 00:21:45.805 [2024-11-19T10:15:54.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.805 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:45.805 Verification LBA range: start 0x0 length 0x2000 00:21:45.805 TLSTESTn1 : 10.02 5403.50 21.11 0.00 0.00 23654.05 4969.81 85633.71 00:21:45.805 [2024-11-19T10:15:54.157Z] =================================================================================================================== 00:21:45.805 [2024-11-19T10:15:54.157Z] Total : 5403.50 21.11 0.00 0.00 23654.05 4969.81 85633.71 00:21:45.805 { 00:21:45.805 "results": [ 00:21:45.805 { 00:21:45.805 "job": "TLSTESTn1", 00:21:45.805 "core_mask": "0x4", 00:21:45.805 "workload": "verify", 00:21:45.805 "status": "finished", 00:21:45.805 "verify_range": { 00:21:45.805 "start": 0, 00:21:45.805 "length": 8192 00:21:45.805 }, 00:21:45.805 "queue_depth": 128, 00:21:45.805 "io_size": 4096, 00:21:45.805 "runtime": 10.016659, 00:21:45.805 "iops": 5403.49831216177, 00:21:45.805 "mibps": 21.107415281881913, 00:21:45.805 "io_failed": 0, 00:21:45.805 "io_timeout": 0, 00:21:45.805 "avg_latency_us": 23654.054535981526, 00:21:45.805 "min_latency_us": 4969.8133333333335, 00:21:45.805 "max_latency_us": 85633.70666666667 00:21:45.805 } 00:21:45.805 ], 00:21:45.805 "core_count": 1 00:21:45.805 } 00:21:45.805 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:45.805 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 4157951 00:21:45.805 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4157951 ']' 00:21:45.805 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4157951 00:21:45.805 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:45.805 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.805 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4157951 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4157951' 00:21:46.066 killing process with pid 4157951 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4157951 00:21:46.066 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.066 00:21:46.066 Latency(us) 00:21:46.066 [2024-11-19T10:15:54.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.066 [2024-11-19T10:15:54.418Z] =================================================================================================================== 00:21:46.066 [2024-11-19T10:15:54.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4157951 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 4157915 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 4157915 ']' 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4157915 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4157915 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4157915' 00:21:46.066 killing process with pid 4157915 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4157915 00:21:46.066 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4157915 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4160284 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4160284 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:46.328 
11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4160284 ']' 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.328 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.328 [2024-11-19 11:15:54.508688] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:46.328 [2024-11-19 11:15:54.508745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.328 [2024-11-19 11:15:54.592856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.328 [2024-11-19 11:15:54.627420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.328 [2024-11-19 11:15:54.627452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.328 [2024-11-19 11:15:54.627459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.328 [2024-11-19 11:15:54.627466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:46.328 [2024-11-19 11:15:54.627472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.328 [2024-11-19 11:15:54.628046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Kfv4rzTWpm 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Kfv4rzTWpm 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:47.270 [2024-11-19 11:15:55.484415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.270 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:47.531 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:47.531 [2024-11-19 11:15:55.821263] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:47.531 [2024-11-19 11:15:55.821486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.531 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:47.792 malloc0 00:21:47.792 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:47.792 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:48.052 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=4160651 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 4160651 /var/tmp/bdevperf.sock 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4160651 ']' 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.313 
11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.313 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.313 [2024-11-19 11:15:56.504360] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:48.313 [2024-11-19 11:15:56.504411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160651 ] 00:21:48.313 [2024-11-19 11:15:56.594128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.313 [2024-11-19 11:15:56.624481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.574 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.574 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:48.574 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:48.574 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:48.835 [2024-11-19 11:15:57.035115] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:48.835 nvme0n1 00:21:48.835 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:49.097 Running I/O for 1 seconds... 00:21:50.040 3514.00 IOPS, 13.73 MiB/s 00:21:50.040 Latency(us) 00:21:50.040 [2024-11-19T10:15:58.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.040 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:50.040 Verification LBA range: start 0x0 length 0x2000 00:21:50.040 nvme0n1 : 1.05 3462.92 13.53 0.00 0.00 36149.13 7427.41 57234.77 00:21:50.040 [2024-11-19T10:15:58.392Z] =================================================================================================================== 00:21:50.040 [2024-11-19T10:15:58.392Z] Total : 3462.92 13.53 0.00 0.00 36149.13 7427.41 57234.77 00:21:50.040 { 00:21:50.040 "results": [ 00:21:50.040 { 00:21:50.040 "job": "nvme0n1", 00:21:50.040 "core_mask": "0x2", 00:21:50.040 "workload": "verify", 00:21:50.040 "status": "finished", 00:21:50.040 "verify_range": { 00:21:50.040 "start": 0, 00:21:50.040 "length": 8192 00:21:50.040 }, 00:21:50.040 "queue_depth": 128, 00:21:50.040 "io_size": 4096, 00:21:50.040 "runtime": 1.051715, 00:21:50.040 "iops": 3462.9153335266683, 00:21:50.040 "mibps": 13.527013021588548, 00:21:50.040 "io_failed": 0, 00:21:50.040 "io_timeout": 0, 00:21:50.040 "avg_latency_us": 36149.13066080908, 00:21:50.040 "min_latency_us": 7427.413333333333, 00:21:50.040 "max_latency_us": 57234.77333333333 00:21:50.040 } 00:21:50.040 ], 00:21:50.040 "core_count": 1 00:21:50.040 } 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 4160651 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4160651 ']' 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 4160651 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4160651 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4160651' 00:21:50.040 killing process with pid 4160651 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4160651 00:21:50.040 Received shutdown signal, test time was about 1.000000 seconds 00:21:50.040 00:21:50.040 Latency(us) 00:21:50.040 [2024-11-19T10:15:58.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.040 [2024-11-19T10:15:58.392Z] =================================================================================================================== 00:21:50.040 [2024-11-19T10:15:58.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.040 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4160651 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 4160284 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4160284 ']' 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4160284 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4160284 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4160284' 00:21:50.302 killing process with pid 4160284 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4160284 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4160284 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4161005 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4161005 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:50.302 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4161005 ']' 00:21:50.563 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.563 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:21:50.563 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.563 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.563 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.563 [2024-11-19 11:15:58.703815] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:50.563 [2024-11-19 11:15:58.703872] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.563 [2024-11-19 11:15:58.787869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.563 [2024-11-19 11:15:58.821661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.563 [2024-11-19 11:15:58.821698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.563 [2024-11-19 11:15:58.821705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.563 [2024-11-19 11:15:58.821712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.563 [2024-11-19 11:15:58.821718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:50.563 [2024-11-19 11:15:58.822300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 [2024-11-19 11:15:59.546458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.505 malloc0 00:21:51.505 [2024-11-19 11:15:59.573229] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.505 [2024-11-19 11:15:59.573449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=4161352 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 4161352 /var/tmp/bdevperf.sock 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4161352 ']' 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.505 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.505 [2024-11-19 11:15:59.663401] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:51.505 [2024-11-19 11:15:59.663450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161352 ] 00:21:51.505 [2024-11-19 11:15:59.751998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.505 [2024-11-19 11:15:59.781707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.450 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.450 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:52.450 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Kfv4rzTWpm 00:21:52.450 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:52.450 [2024-11-19 11:16:00.741452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.711 nvme0n1 00:21:52.711 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:52.711 Running I/O for 1 seconds... 
00:21:53.655 4914.00 IOPS, 19.20 MiB/s 00:21:53.655 Latency(us) 00:21:53.655 [2024-11-19T10:16:02.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.655 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:53.655 Verification LBA range: start 0x0 length 0x2000 00:21:53.655 nvme0n1 : 1.02 4949.83 19.34 0.00 0.00 25648.78 6062.08 49152.00 00:21:53.655 [2024-11-19T10:16:02.007Z] =================================================================================================================== 00:21:53.655 [2024-11-19T10:16:02.008Z] Total : 4949.83 19.34 0.00 0.00 25648.78 6062.08 49152.00 00:21:53.656 { 00:21:53.656 "results": [ 00:21:53.656 { 00:21:53.656 "job": "nvme0n1", 00:21:53.656 "core_mask": "0x2", 00:21:53.656 "workload": "verify", 00:21:53.656 "status": "finished", 00:21:53.656 "verify_range": { 00:21:53.656 "start": 0, 00:21:53.656 "length": 8192 00:21:53.656 }, 00:21:53.656 "queue_depth": 128, 00:21:53.656 "io_size": 4096, 00:21:53.656 "runtime": 1.018823, 00:21:53.656 "iops": 4949.829361920569, 00:21:53.656 "mibps": 19.33527094500222, 00:21:53.656 "io_failed": 0, 00:21:53.656 "io_timeout": 0, 00:21:53.656 "avg_latency_us": 25648.778922598984, 00:21:53.656 "min_latency_us": 6062.08, 00:21:53.656 "max_latency_us": 49152.0 00:21:53.656 } 00:21:53.656 ], 00:21:53.656 "core_count": 1 00:21:53.656 } 00:21:53.656 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:53.656 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.656 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.917 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.917 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:53.917 "subsystems": [ 00:21:53.917 { 00:21:53.917 "subsystem": "keyring", 
00:21:53.917 "config": [ 00:21:53.917 { 00:21:53.917 "method": "keyring_file_add_key", 00:21:53.917 "params": { 00:21:53.917 "name": "key0", 00:21:53.917 "path": "/tmp/tmp.Kfv4rzTWpm" 00:21:53.917 } 00:21:53.917 } 00:21:53.917 ] 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "subsystem": "iobuf", 00:21:53.917 "config": [ 00:21:53.917 { 00:21:53.917 "method": "iobuf_set_options", 00:21:53.917 "params": { 00:21:53.917 "small_pool_count": 8192, 00:21:53.917 "large_pool_count": 1024, 00:21:53.917 "small_bufsize": 8192, 00:21:53.917 "large_bufsize": 135168, 00:21:53.917 "enable_numa": false 00:21:53.917 } 00:21:53.917 } 00:21:53.917 ] 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "subsystem": "sock", 00:21:53.917 "config": [ 00:21:53.917 { 00:21:53.917 "method": "sock_set_default_impl", 00:21:53.917 "params": { 00:21:53.917 "impl_name": "posix" 00:21:53.917 } 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "method": "sock_impl_set_options", 00:21:53.917 "params": { 00:21:53.917 "impl_name": "ssl", 00:21:53.917 "recv_buf_size": 4096, 00:21:53.917 "send_buf_size": 4096, 00:21:53.917 "enable_recv_pipe": true, 00:21:53.917 "enable_quickack": false, 00:21:53.917 "enable_placement_id": 0, 00:21:53.917 "enable_zerocopy_send_server": true, 00:21:53.917 "enable_zerocopy_send_client": false, 00:21:53.917 "zerocopy_threshold": 0, 00:21:53.917 "tls_version": 0, 00:21:53.917 "enable_ktls": false 00:21:53.917 } 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "method": "sock_impl_set_options", 00:21:53.917 "params": { 00:21:53.917 "impl_name": "posix", 00:21:53.917 "recv_buf_size": 2097152, 00:21:53.917 "send_buf_size": 2097152, 00:21:53.917 "enable_recv_pipe": true, 00:21:53.917 "enable_quickack": false, 00:21:53.917 "enable_placement_id": 0, 00:21:53.917 "enable_zerocopy_send_server": true, 00:21:53.917 "enable_zerocopy_send_client": false, 00:21:53.917 "zerocopy_threshold": 0, 00:21:53.917 "tls_version": 0, 00:21:53.917 "enable_ktls": false 00:21:53.917 } 00:21:53.917 } 00:21:53.917 ] 
00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "subsystem": "vmd", 00:21:53.917 "config": [] 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "subsystem": "accel", 00:21:53.917 "config": [ 00:21:53.917 { 00:21:53.917 "method": "accel_set_options", 00:21:53.917 "params": { 00:21:53.917 "small_cache_size": 128, 00:21:53.917 "large_cache_size": 16, 00:21:53.917 "task_count": 2048, 00:21:53.917 "sequence_count": 2048, 00:21:53.917 "buf_count": 2048 00:21:53.917 } 00:21:53.917 } 00:21:53.917 ] 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "subsystem": "bdev", 00:21:53.917 "config": [ 00:21:53.917 { 00:21:53.917 "method": "bdev_set_options", 00:21:53.917 "params": { 00:21:53.917 "bdev_io_pool_size": 65535, 00:21:53.917 "bdev_io_cache_size": 256, 00:21:53.917 "bdev_auto_examine": true, 00:21:53.917 "iobuf_small_cache_size": 128, 00:21:53.917 "iobuf_large_cache_size": 16 00:21:53.917 } 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "method": "bdev_raid_set_options", 00:21:53.917 "params": { 00:21:53.917 "process_window_size_kb": 1024, 00:21:53.917 "process_max_bandwidth_mb_sec": 0 00:21:53.917 } 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "method": "bdev_iscsi_set_options", 00:21:53.917 "params": { 00:21:53.917 "timeout_sec": 30 00:21:53.917 } 00:21:53.917 }, 00:21:53.917 { 00:21:53.917 "method": "bdev_nvme_set_options", 00:21:53.917 "params": { 00:21:53.917 "action_on_timeout": "none", 00:21:53.917 "timeout_us": 0, 00:21:53.917 "timeout_admin_us": 0, 00:21:53.917 "keep_alive_timeout_ms": 10000, 00:21:53.917 "arbitration_burst": 0, 00:21:53.917 "low_priority_weight": 0, 00:21:53.917 "medium_priority_weight": 0, 00:21:53.917 "high_priority_weight": 0, 00:21:53.917 "nvme_adminq_poll_period_us": 10000, 00:21:53.917 "nvme_ioq_poll_period_us": 0, 00:21:53.917 "io_queue_requests": 0, 00:21:53.917 "delay_cmd_submit": true, 00:21:53.917 "transport_retry_count": 4, 00:21:53.917 "bdev_retry_count": 3, 00:21:53.917 "transport_ack_timeout": 0, 00:21:53.917 "ctrlr_loss_timeout_sec": 0, 00:21:53.917 
"reconnect_delay_sec": 0, 00:21:53.917 "fast_io_fail_timeout_sec": 0, 00:21:53.917 "disable_auto_failback": false, 00:21:53.917 "generate_uuids": false, 00:21:53.917 "transport_tos": 0, 00:21:53.917 "nvme_error_stat": false, 00:21:53.917 "rdma_srq_size": 0, 00:21:53.917 "io_path_stat": false, 00:21:53.917 "allow_accel_sequence": false, 00:21:53.917 "rdma_max_cq_size": 0, 00:21:53.917 "rdma_cm_event_timeout_ms": 0, 00:21:53.917 "dhchap_digests": [ 00:21:53.917 "sha256", 00:21:53.917 "sha384", 00:21:53.917 "sha512" 00:21:53.918 ], 00:21:53.918 "dhchap_dhgroups": [ 00:21:53.918 "null", 00:21:53.918 "ffdhe2048", 00:21:53.918 "ffdhe3072", 00:21:53.918 "ffdhe4096", 00:21:53.918 "ffdhe6144", 00:21:53.918 "ffdhe8192" 00:21:53.918 ] 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "bdev_nvme_set_hotplug", 00:21:53.918 "params": { 00:21:53.918 "period_us": 100000, 00:21:53.918 "enable": false 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "bdev_malloc_create", 00:21:53.918 "params": { 00:21:53.918 "name": "malloc0", 00:21:53.918 "num_blocks": 8192, 00:21:53.918 "block_size": 4096, 00:21:53.918 "physical_block_size": 4096, 00:21:53.918 "uuid": "4985c7c2-3ead-4eaf-b6bd-8ad6fe4cfb50", 00:21:53.918 "optimal_io_boundary": 0, 00:21:53.918 "md_size": 0, 00:21:53.918 "dif_type": 0, 00:21:53.918 "dif_is_head_of_md": false, 00:21:53.918 "dif_pi_format": 0 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "bdev_wait_for_examine" 00:21:53.918 } 00:21:53.918 ] 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "subsystem": "nbd", 00:21:53.918 "config": [] 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "subsystem": "scheduler", 00:21:53.918 "config": [ 00:21:53.918 { 00:21:53.918 "method": "framework_set_scheduler", 00:21:53.918 "params": { 00:21:53.918 "name": "static" 00:21:53.918 } 00:21:53.918 } 00:21:53.918 ] 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "subsystem": "nvmf", 00:21:53.918 "config": [ 00:21:53.918 { 00:21:53.918 
"method": "nvmf_set_config", 00:21:53.918 "params": { 00:21:53.918 "discovery_filter": "match_any", 00:21:53.918 "admin_cmd_passthru": { 00:21:53.918 "identify_ctrlr": false 00:21:53.918 }, 00:21:53.918 "dhchap_digests": [ 00:21:53.918 "sha256", 00:21:53.918 "sha384", 00:21:53.918 "sha512" 00:21:53.918 ], 00:21:53.918 "dhchap_dhgroups": [ 00:21:53.918 "null", 00:21:53.918 "ffdhe2048", 00:21:53.918 "ffdhe3072", 00:21:53.918 "ffdhe4096", 00:21:53.918 "ffdhe6144", 00:21:53.918 "ffdhe8192" 00:21:53.918 ] 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "nvmf_set_max_subsystems", 00:21:53.918 "params": { 00:21:53.918 "max_subsystems": 1024 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "nvmf_set_crdt", 00:21:53.918 "params": { 00:21:53.918 "crdt1": 0, 00:21:53.918 "crdt2": 0, 00:21:53.918 "crdt3": 0 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "nvmf_create_transport", 00:21:53.918 "params": { 00:21:53.918 "trtype": "TCP", 00:21:53.918 "max_queue_depth": 128, 00:21:53.918 "max_io_qpairs_per_ctrlr": 127, 00:21:53.918 "in_capsule_data_size": 4096, 00:21:53.918 "max_io_size": 131072, 00:21:53.918 "io_unit_size": 131072, 00:21:53.918 "max_aq_depth": 128, 00:21:53.918 "num_shared_buffers": 511, 00:21:53.918 "buf_cache_size": 4294967295, 00:21:53.918 "dif_insert_or_strip": false, 00:21:53.918 "zcopy": false, 00:21:53.918 "c2h_success": false, 00:21:53.918 "sock_priority": 0, 00:21:53.918 "abort_timeout_sec": 1, 00:21:53.918 "ack_timeout": 0, 00:21:53.918 "data_wr_pool_size": 0 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "nvmf_create_subsystem", 00:21:53.918 "params": { 00:21:53.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.918 "allow_any_host": false, 00:21:53.918 "serial_number": "00000000000000000000", 00:21:53.918 "model_number": "SPDK bdev Controller", 00:21:53.918 "max_namespaces": 32, 00:21:53.918 "min_cntlid": 1, 00:21:53.918 "max_cntlid": 65519, 00:21:53.918 "ana_reporting": 
false 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "nvmf_subsystem_add_host", 00:21:53.918 "params": { 00:21:53.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.918 "host": "nqn.2016-06.io.spdk:host1", 00:21:53.918 "psk": "key0" 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "nvmf_subsystem_add_ns", 00:21:53.918 "params": { 00:21:53.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.918 "namespace": { 00:21:53.918 "nsid": 1, 00:21:53.918 "bdev_name": "malloc0", 00:21:53.918 "nguid": "4985C7C23EAD4EAFB6BD8AD6FE4CFB50", 00:21:53.918 "uuid": "4985c7c2-3ead-4eaf-b6bd-8ad6fe4cfb50", 00:21:53.918 "no_auto_visible": false 00:21:53.918 } 00:21:53.918 } 00:21:53.918 }, 00:21:53.918 { 00:21:53.918 "method": "nvmf_subsystem_add_listener", 00:21:53.918 "params": { 00:21:53.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.918 "listen_address": { 00:21:53.918 "trtype": "TCP", 00:21:53.918 "adrfam": "IPv4", 00:21:53.918 "traddr": "10.0.0.2", 00:21:53.918 "trsvcid": "4420" 00:21:53.918 }, 00:21:53.918 "secure_channel": false, 00:21:53.918 "sock_impl": "ssl" 00:21:53.918 } 00:21:53.918 } 00:21:53.918 ] 00:21:53.918 } 00:21:53.918 ] 00:21:53.918 }' 00:21:53.918 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:54.180 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:54.180 "subsystems": [ 00:21:54.180 { 00:21:54.180 "subsystem": "keyring", 00:21:54.180 "config": [ 00:21:54.180 { 00:21:54.180 "method": "keyring_file_add_key", 00:21:54.180 "params": { 00:21:54.180 "name": "key0", 00:21:54.180 "path": "/tmp/tmp.Kfv4rzTWpm" 00:21:54.180 } 00:21:54.180 } 00:21:54.180 ] 00:21:54.180 }, 00:21:54.180 { 00:21:54.180 "subsystem": "iobuf", 00:21:54.180 "config": [ 00:21:54.180 { 00:21:54.180 "method": "iobuf_set_options", 00:21:54.180 "params": { 00:21:54.180 "small_pool_count": 
8192, 00:21:54.180 "large_pool_count": 1024, 00:21:54.180 "small_bufsize": 8192, 00:21:54.180 "large_bufsize": 135168, 00:21:54.180 "enable_numa": false 00:21:54.180 } 00:21:54.180 } 00:21:54.180 ] 00:21:54.180 }, 00:21:54.180 { 00:21:54.180 "subsystem": "sock", 00:21:54.180 "config": [ 00:21:54.180 { 00:21:54.180 "method": "sock_set_default_impl", 00:21:54.180 "params": { 00:21:54.180 "impl_name": "posix" 00:21:54.180 } 00:21:54.180 }, 00:21:54.180 { 00:21:54.180 "method": "sock_impl_set_options", 00:21:54.180 "params": { 00:21:54.180 "impl_name": "ssl", 00:21:54.180 "recv_buf_size": 4096, 00:21:54.180 "send_buf_size": 4096, 00:21:54.180 "enable_recv_pipe": true, 00:21:54.180 "enable_quickack": false, 00:21:54.180 "enable_placement_id": 0, 00:21:54.180 "enable_zerocopy_send_server": true, 00:21:54.180 "enable_zerocopy_send_client": false, 00:21:54.180 "zerocopy_threshold": 0, 00:21:54.180 "tls_version": 0, 00:21:54.180 "enable_ktls": false 00:21:54.180 } 00:21:54.180 }, 00:21:54.180 { 00:21:54.180 "method": "sock_impl_set_options", 00:21:54.180 "params": { 00:21:54.180 "impl_name": "posix", 00:21:54.180 "recv_buf_size": 2097152, 00:21:54.180 "send_buf_size": 2097152, 00:21:54.180 "enable_recv_pipe": true, 00:21:54.180 "enable_quickack": false, 00:21:54.180 "enable_placement_id": 0, 00:21:54.180 "enable_zerocopy_send_server": true, 00:21:54.180 "enable_zerocopy_send_client": false, 00:21:54.180 "zerocopy_threshold": 0, 00:21:54.180 "tls_version": 0, 00:21:54.180 "enable_ktls": false 00:21:54.180 } 00:21:54.180 } 00:21:54.180 ] 00:21:54.180 }, 00:21:54.180 { 00:21:54.180 "subsystem": "vmd", 00:21:54.180 "config": [] 00:21:54.180 }, 00:21:54.180 { 00:21:54.180 "subsystem": "accel", 00:21:54.180 "config": [ 00:21:54.180 { 00:21:54.180 "method": "accel_set_options", 00:21:54.180 "params": { 00:21:54.180 "small_cache_size": 128, 00:21:54.180 "large_cache_size": 16, 00:21:54.180 "task_count": 2048, 00:21:54.180 "sequence_count": 2048, 00:21:54.180 "buf_count": 2048 
00:21:54.180 } 00:21:54.180 } 00:21:54.180 ] 00:21:54.180 }, 00:21:54.180 { 00:21:54.180 "subsystem": "bdev", 00:21:54.181 "config": [ 00:21:54.181 { 00:21:54.181 "method": "bdev_set_options", 00:21:54.181 "params": { 00:21:54.181 "bdev_io_pool_size": 65535, 00:21:54.181 "bdev_io_cache_size": 256, 00:21:54.181 "bdev_auto_examine": true, 00:21:54.181 "iobuf_small_cache_size": 128, 00:21:54.181 "iobuf_large_cache_size": 16 00:21:54.181 } 00:21:54.181 }, 00:21:54.181 { 00:21:54.181 "method": "bdev_raid_set_options", 00:21:54.181 "params": { 00:21:54.181 "process_window_size_kb": 1024, 00:21:54.181 "process_max_bandwidth_mb_sec": 0 00:21:54.181 } 00:21:54.181 }, 00:21:54.181 { 00:21:54.181 "method": "bdev_iscsi_set_options", 00:21:54.181 "params": { 00:21:54.181 "timeout_sec": 30 00:21:54.181 } 00:21:54.181 }, 00:21:54.181 { 00:21:54.181 "method": "bdev_nvme_set_options", 00:21:54.181 "params": { 00:21:54.181 "action_on_timeout": "none", 00:21:54.181 "timeout_us": 0, 00:21:54.181 "timeout_admin_us": 0, 00:21:54.181 "keep_alive_timeout_ms": 10000, 00:21:54.181 "arbitration_burst": 0, 00:21:54.181 "low_priority_weight": 0, 00:21:54.181 "medium_priority_weight": 0, 00:21:54.181 "high_priority_weight": 0, 00:21:54.181 "nvme_adminq_poll_period_us": 10000, 00:21:54.181 "nvme_ioq_poll_period_us": 0, 00:21:54.181 "io_queue_requests": 512, 00:21:54.181 "delay_cmd_submit": true, 00:21:54.181 "transport_retry_count": 4, 00:21:54.181 "bdev_retry_count": 3, 00:21:54.181 "transport_ack_timeout": 0, 00:21:54.181 "ctrlr_loss_timeout_sec": 0, 00:21:54.181 "reconnect_delay_sec": 0, 00:21:54.181 "fast_io_fail_timeout_sec": 0, 00:21:54.181 "disable_auto_failback": false, 00:21:54.181 "generate_uuids": false, 00:21:54.181 "transport_tos": 0, 00:21:54.181 "nvme_error_stat": false, 00:21:54.181 "rdma_srq_size": 0, 00:21:54.181 "io_path_stat": false, 00:21:54.181 "allow_accel_sequence": false, 00:21:54.181 "rdma_max_cq_size": 0, 00:21:54.181 "rdma_cm_event_timeout_ms": 0, 00:21:54.181 
"dhchap_digests": [ 00:21:54.181 "sha256", 00:21:54.181 "sha384", 00:21:54.181 "sha512" 00:21:54.181 ], 00:21:54.181 "dhchap_dhgroups": [ 00:21:54.181 "null", 00:21:54.181 "ffdhe2048", 00:21:54.181 "ffdhe3072", 00:21:54.181 "ffdhe4096", 00:21:54.181 "ffdhe6144", 00:21:54.181 "ffdhe8192" 00:21:54.181 ] 00:21:54.181 } 00:21:54.181 }, 00:21:54.181 { 00:21:54.181 "method": "bdev_nvme_attach_controller", 00:21:54.181 "params": { 00:21:54.181 "name": "nvme0", 00:21:54.181 "trtype": "TCP", 00:21:54.181 "adrfam": "IPv4", 00:21:54.181 "traddr": "10.0.0.2", 00:21:54.181 "trsvcid": "4420", 00:21:54.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.181 "prchk_reftag": false, 00:21:54.181 "prchk_guard": false, 00:21:54.181 "ctrlr_loss_timeout_sec": 0, 00:21:54.181 "reconnect_delay_sec": 0, 00:21:54.181 "fast_io_fail_timeout_sec": 0, 00:21:54.181 "psk": "key0", 00:21:54.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:54.181 "hdgst": false, 00:21:54.181 "ddgst": false, 00:21:54.181 "multipath": "multipath" 00:21:54.181 } 00:21:54.181 }, 00:21:54.181 { 00:21:54.181 "method": "bdev_nvme_set_hotplug", 00:21:54.181 "params": { 00:21:54.181 "period_us": 100000, 00:21:54.181 "enable": false 00:21:54.181 } 00:21:54.181 }, 00:21:54.181 { 00:21:54.181 "method": "bdev_enable_histogram", 00:21:54.181 "params": { 00:21:54.181 "name": "nvme0n1", 00:21:54.181 "enable": true 00:21:54.181 } 00:21:54.181 }, 00:21:54.181 { 00:21:54.181 "method": "bdev_wait_for_examine" 00:21:54.181 } 00:21:54.181 ] 00:21:54.181 }, 00:21:54.181 { 00:21:54.181 "subsystem": "nbd", 00:21:54.181 "config": [] 00:21:54.181 } 00:21:54.181 ] 00:21:54.181 }' 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 4161352 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4161352 ']' 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4161352 00:21:54.181 11:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4161352 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4161352' 00:21:54.181 killing process with pid 4161352 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4161352 00:21:54.181 Received shutdown signal, test time was about 1.000000 seconds 00:21:54.181 00:21:54.181 Latency(us) 00:21:54.181 [2024-11-19T10:16:02.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.181 [2024-11-19T10:16:02.533Z] =================================================================================================================== 00:21:54.181 [2024-11-19T10:16:02.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4161352 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 4161005 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4161005 ']' 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4161005 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:54.181 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.181 
11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4161005 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4161005' 00:21:54.443 killing process with pid 4161005 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4161005 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4161005 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.443 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:54.443 "subsystems": [ 00:21:54.443 { 00:21:54.443 "subsystem": "keyring", 00:21:54.443 "config": [ 00:21:54.443 { 00:21:54.443 "method": "keyring_file_add_key", 00:21:54.443 "params": { 00:21:54.443 "name": "key0", 00:21:54.443 "path": "/tmp/tmp.Kfv4rzTWpm" 00:21:54.443 } 00:21:54.443 } 00:21:54.443 ] 00:21:54.443 }, 00:21:54.443 { 00:21:54.443 "subsystem": "iobuf", 00:21:54.443 "config": [ 00:21:54.443 { 00:21:54.443 "method": "iobuf_set_options", 00:21:54.443 "params": { 00:21:54.443 "small_pool_count": 8192, 00:21:54.443 "large_pool_count": 1024, 00:21:54.443 "small_bufsize": 8192, 00:21:54.443 "large_bufsize": 135168, 00:21:54.443 "enable_numa": false 00:21:54.443 } 00:21:54.443 } 00:21:54.443 ] 00:21:54.443 }, 00:21:54.443 { 00:21:54.443 "subsystem": "sock", 00:21:54.443 "config": [ 
00:21:54.443 { 00:21:54.443 "method": "sock_set_default_impl", 00:21:54.443 "params": { 00:21:54.443 "impl_name": "posix" 00:21:54.443 } 00:21:54.443 }, 00:21:54.443 { 00:21:54.443 "method": "sock_impl_set_options", 00:21:54.443 "params": { 00:21:54.443 "impl_name": "ssl", 00:21:54.443 "recv_buf_size": 4096, 00:21:54.443 "send_buf_size": 4096, 00:21:54.443 "enable_recv_pipe": true, 00:21:54.443 "enable_quickack": false, 00:21:54.443 "enable_placement_id": 0, 00:21:54.443 "enable_zerocopy_send_server": true, 00:21:54.443 "enable_zerocopy_send_client": false, 00:21:54.443 "zerocopy_threshold": 0, 00:21:54.443 "tls_version": 0, 00:21:54.444 "enable_ktls": false 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "sock_impl_set_options", 00:21:54.444 "params": { 00:21:54.444 "impl_name": "posix", 00:21:54.444 "recv_buf_size": 2097152, 00:21:54.444 "send_buf_size": 2097152, 00:21:54.444 "enable_recv_pipe": true, 00:21:54.444 "enable_quickack": false, 00:21:54.444 "enable_placement_id": 0, 00:21:54.444 "enable_zerocopy_send_server": true, 00:21:54.444 "enable_zerocopy_send_client": false, 00:21:54.444 "zerocopy_threshold": 0, 00:21:54.444 "tls_version": 0, 00:21:54.444 "enable_ktls": false 00:21:54.444 } 00:21:54.444 } 00:21:54.444 ] 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "subsystem": "vmd", 00:21:54.444 "config": [] 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "subsystem": "accel", 00:21:54.444 "config": [ 00:21:54.444 { 00:21:54.444 "method": "accel_set_options", 00:21:54.444 "params": { 00:21:54.444 "small_cache_size": 128, 00:21:54.444 "large_cache_size": 16, 00:21:54.444 "task_count": 2048, 00:21:54.444 "sequence_count": 2048, 00:21:54.444 "buf_count": 2048 00:21:54.444 } 00:21:54.444 } 00:21:54.444 ] 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "subsystem": "bdev", 00:21:54.444 "config": [ 00:21:54.444 { 00:21:54.444 "method": "bdev_set_options", 00:21:54.444 "params": { 00:21:54.444 "bdev_io_pool_size": 65535, 00:21:54.444 "bdev_io_cache_size": 
256, 00:21:54.444 "bdev_auto_examine": true, 00:21:54.444 "iobuf_small_cache_size": 128, 00:21:54.444 "iobuf_large_cache_size": 16 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "bdev_raid_set_options", 00:21:54.444 "params": { 00:21:54.444 "process_window_size_kb": 1024, 00:21:54.444 "process_max_bandwidth_mb_sec": 0 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "bdev_iscsi_set_options", 00:21:54.444 "params": { 00:21:54.444 "timeout_sec": 30 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "bdev_nvme_set_options", 00:21:54.444 "params": { 00:21:54.444 "action_on_timeout": "none", 00:21:54.444 "timeout_us": 0, 00:21:54.444 "timeout_admin_us": 0, 00:21:54.444 "keep_alive_timeout_ms": 10000, 00:21:54.444 "arbitration_burst": 0, 00:21:54.444 "low_priority_weight": 0, 00:21:54.444 "medium_priority_weight": 0, 00:21:54.444 "high_priority_weight": 0, 00:21:54.444 "nvme_adminq_poll_period_us": 10000, 00:21:54.444 "nvme_ioq_poll_period_us": 0, 00:21:54.444 "io_queue_requests": 0, 00:21:54.444 "delay_cmd_submit": true, 00:21:54.444 "transport_retry_count": 4, 00:21:54.444 "bdev_retry_count": 3, 00:21:54.444 "transport_ack_timeout": 0, 00:21:54.444 "ctrlr_loss_timeout_sec": 0, 00:21:54.444 "reconnect_delay_sec": 0, 00:21:54.444 "fast_io_fail_timeout_sec": 0, 00:21:54.444 "disable_auto_failback": false, 00:21:54.444 "generate_uuids": false, 00:21:54.444 "transport_tos": 0, 00:21:54.444 "nvme_error_stat": false, 00:21:54.444 "rdma_srq_size": 0, 00:21:54.444 "io_path_stat": false, 00:21:54.444 "allow_accel_sequence": false, 00:21:54.444 "rdma_max_cq_size": 0, 00:21:54.444 "rdma_cm_event_timeout_ms": 0, 00:21:54.444 "dhchap_digests": [ 00:21:54.444 "sha256", 00:21:54.444 "sha384", 00:21:54.444 "sha512" 00:21:54.444 ], 00:21:54.444 "dhchap_dhgroups": [ 00:21:54.444 "null", 00:21:54.444 "ffdhe2048", 00:21:54.444 "ffdhe3072", 00:21:54.444 "ffdhe4096", 00:21:54.444 "ffdhe6144", 00:21:54.444 "ffdhe8192" 00:21:54.444 ] 
00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "bdev_nvme_set_hotplug", 00:21:54.444 "params": { 00:21:54.444 "period_us": 100000, 00:21:54.444 "enable": false 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "bdev_malloc_create", 00:21:54.444 "params": { 00:21:54.444 "name": "malloc0", 00:21:54.444 "num_blocks": 8192, 00:21:54.444 "block_size": 4096, 00:21:54.444 "physical_block_size": 4096, 00:21:54.444 "uuid": "4985c7c2-3ead-4eaf-b6bd-8ad6fe4cfb50", 00:21:54.444 "optimal_io_boundary": 0, 00:21:54.444 "md_size": 0, 00:21:54.444 "dif_type": 0, 00:21:54.444 "dif_is_head_of_md": false, 00:21:54.444 "dif_pi_format": 0 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "bdev_wait_for_examine" 00:21:54.444 } 00:21:54.444 ] 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "subsystem": "nbd", 00:21:54.444 "config": [] 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "subsystem": "scheduler", 00:21:54.444 "config": [ 00:21:54.444 { 00:21:54.444 "method": "framework_set_scheduler", 00:21:54.444 "params": { 00:21:54.444 "name": "static" 00:21:54.444 } 00:21:54.444 } 00:21:54.444 ] 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "subsystem": "nvmf", 00:21:54.444 "config": [ 00:21:54.444 { 00:21:54.444 "method": "nvmf_set_config", 00:21:54.444 "params": { 00:21:54.444 "discovery_filter": "match_any", 00:21:54.444 "admin_cmd_passthru": { 00:21:54.444 "identify_ctrlr": false 00:21:54.444 }, 00:21:54.444 "dhchap_digests": [ 00:21:54.444 "sha256", 00:21:54.444 "sha384", 00:21:54.444 "sha512" 00:21:54.444 ], 00:21:54.444 "dhchap_dhgroups": [ 00:21:54.444 "null", 00:21:54.444 "ffdhe2048", 00:21:54.444 "ffdhe3072", 00:21:54.444 "ffdhe4096", 00:21:54.444 "ffdhe6144", 00:21:54.444 "ffdhe8192" 00:21:54.444 ] 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "nvmf_set_max_subsystems", 00:21:54.444 "params": { 00:21:54.444 "max_subsystems": 1024 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": 
"nvmf_set_crdt", 00:21:54.444 "params": { 00:21:54.444 "crdt1": 0, 00:21:54.444 "crdt2": 0, 00:21:54.444 "crdt3": 0 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "nvmf_create_transport", 00:21:54.444 "params": { 00:21:54.444 "trtype": "TCP", 00:21:54.444 "max_queue_depth": 128, 00:21:54.444 "max_io_qpairs_per_ctrlr": 127, 00:21:54.444 "in_capsule_data_size": 4096, 00:21:54.444 "max_io_size": 131072, 00:21:54.444 "io_unit_size": 131072, 00:21:54.444 "max_aq_depth": 128, 00:21:54.444 "num_shared_buffers": 511, 00:21:54.444 "buf_cache_size": 4294967295, 00:21:54.444 "dif_insert_or_strip": false, 00:21:54.444 "zcopy": false, 00:21:54.444 "c2h_success": false, 00:21:54.444 "sock_priority": 0, 00:21:54.444 "abort_timeout_sec": 1, 00:21:54.444 "ack_timeout": 0, 00:21:54.444 "data_wr_pool_size": 0 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "nvmf_create_subsystem", 00:21:54.444 "params": { 00:21:54.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.444 "allow_any_host": false, 00:21:54.444 "serial_number": "00000000000000000000", 00:21:54.444 "model_number": "SPDK bdev Controller", 00:21:54.444 "max_namespaces": 32, 00:21:54.444 "min_cntlid": 1, 00:21:54.444 "max_cntlid": 65519, 00:21:54.444 "ana_reporting": false 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "nvmf_subsystem_add_host", 00:21:54.444 "params": { 00:21:54.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.444 "host": "nqn.2016-06.io.spdk:host1", 00:21:54.444 "psk": "key0" 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 00:21:54.444 "method": "nvmf_subsystem_add_ns", 00:21:54.444 "params": { 00:21:54.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.444 "namespace": { 00:21:54.444 "nsid": 1, 00:21:54.444 "bdev_name": "malloc0", 00:21:54.444 "nguid": "4985C7C23EAD4EAFB6BD8AD6FE4CFB50", 00:21:54.444 "uuid": "4985c7c2-3ead-4eaf-b6bd-8ad6fe4cfb50", 00:21:54.444 "no_auto_visible": false 00:21:54.444 } 00:21:54.444 } 00:21:54.444 }, 00:21:54.444 { 
00:21:54.444 "method": "nvmf_subsystem_add_listener", 00:21:54.444 "params": { 00:21:54.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.444 "listen_address": { 00:21:54.444 "trtype": "TCP", 00:21:54.444 "adrfam": "IPv4", 00:21:54.444 "traddr": "10.0.0.2", 00:21:54.444 "trsvcid": "4420" 00:21:54.444 }, 00:21:54.444 "secure_channel": false, 00:21:54.444 "sock_impl": "ssl" 00:21:54.444 } 00:21:54.444 } 00:21:54.444 ] 00:21:54.444 } 00:21:54.444 ] 00:21:54.444 }' 00:21:54.444 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.444 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4161942 00:21:54.444 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4161942 00:21:54.444 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:54.444 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4161942 ']' 00:21:54.444 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.444 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.444 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.445 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.445 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.445 [2024-11-19 11:16:02.752431] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:21:54.445 [2024-11-19 11:16:02.752489] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.705 [2024-11-19 11:16:02.837970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.705 [2024-11-19 11:16:02.873508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.705 [2024-11-19 11:16:02.873542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.705 [2024-11-19 11:16:02.873551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.705 [2024-11-19 11:16:02.873558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.705 [2024-11-19 11:16:02.873564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:54.705 [2024-11-19 11:16:02.874182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.966 [2024-11-19 11:16:03.072711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.966 [2024-11-19 11:16:03.104724] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:54.966 [2024-11-19 11:16:03.104950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.227 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.227 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:55.227 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.227 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.227 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=4162065 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 4162065 /var/tmp/bdevperf.sock 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4162065 ']' 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:55.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.489 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:55.489 "subsystems": [ 00:21:55.489 { 00:21:55.489 "subsystem": "keyring", 00:21:55.489 "config": [ 00:21:55.489 { 00:21:55.489 "method": "keyring_file_add_key", 00:21:55.489 "params": { 00:21:55.489 "name": "key0", 00:21:55.489 "path": "/tmp/tmp.Kfv4rzTWpm" 00:21:55.489 } 00:21:55.489 } 00:21:55.489 ] 00:21:55.489 }, 00:21:55.489 { 00:21:55.489 "subsystem": "iobuf", 00:21:55.489 "config": [ 00:21:55.489 { 00:21:55.489 "method": "iobuf_set_options", 00:21:55.489 "params": { 00:21:55.489 "small_pool_count": 8192, 00:21:55.489 "large_pool_count": 1024, 00:21:55.489 "small_bufsize": 8192, 00:21:55.489 "large_bufsize": 135168, 00:21:55.489 "enable_numa": false 00:21:55.489 } 00:21:55.489 } 00:21:55.489 ] 00:21:55.489 }, 00:21:55.489 { 00:21:55.489 "subsystem": "sock", 00:21:55.489 "config": [ 00:21:55.489 { 00:21:55.489 "method": "sock_set_default_impl", 00:21:55.489 "params": { 00:21:55.489 "impl_name": "posix" 00:21:55.489 } 00:21:55.489 }, 00:21:55.489 { 00:21:55.489 "method": "sock_impl_set_options", 00:21:55.489 "params": { 00:21:55.489 "impl_name": "ssl", 00:21:55.489 "recv_buf_size": 4096, 00:21:55.489 "send_buf_size": 4096, 00:21:55.489 "enable_recv_pipe": true, 00:21:55.489 "enable_quickack": false, 00:21:55.489 "enable_placement_id": 0, 00:21:55.489 "enable_zerocopy_send_server": true, 00:21:55.489 
"enable_zerocopy_send_client": false, 00:21:55.489 "zerocopy_threshold": 0, 00:21:55.489 "tls_version": 0, 00:21:55.489 "enable_ktls": false 00:21:55.489 } 00:21:55.489 }, 00:21:55.489 { 00:21:55.489 "method": "sock_impl_set_options", 00:21:55.489 "params": { 00:21:55.489 "impl_name": "posix", 00:21:55.489 "recv_buf_size": 2097152, 00:21:55.489 "send_buf_size": 2097152, 00:21:55.489 "enable_recv_pipe": true, 00:21:55.489 "enable_quickack": false, 00:21:55.489 "enable_placement_id": 0, 00:21:55.489 "enable_zerocopy_send_server": true, 00:21:55.489 "enable_zerocopy_send_client": false, 00:21:55.489 "zerocopy_threshold": 0, 00:21:55.489 "tls_version": 0, 00:21:55.489 "enable_ktls": false 00:21:55.489 } 00:21:55.489 } 00:21:55.489 ] 00:21:55.489 }, 00:21:55.489 { 00:21:55.489 "subsystem": "vmd", 00:21:55.489 "config": [] 00:21:55.489 }, 00:21:55.489 { 00:21:55.489 "subsystem": "accel", 00:21:55.489 "config": [ 00:21:55.489 { 00:21:55.489 "method": "accel_set_options", 00:21:55.489 "params": { 00:21:55.489 "small_cache_size": 128, 00:21:55.489 "large_cache_size": 16, 00:21:55.489 "task_count": 2048, 00:21:55.489 "sequence_count": 2048, 00:21:55.490 "buf_count": 2048 00:21:55.490 } 00:21:55.490 } 00:21:55.490 ] 00:21:55.490 }, 00:21:55.490 { 00:21:55.490 "subsystem": "bdev", 00:21:55.490 "config": [ 00:21:55.490 { 00:21:55.490 "method": "bdev_set_options", 00:21:55.490 "params": { 00:21:55.490 "bdev_io_pool_size": 65535, 00:21:55.490 "bdev_io_cache_size": 256, 00:21:55.490 "bdev_auto_examine": true, 00:21:55.490 "iobuf_small_cache_size": 128, 00:21:55.490 "iobuf_large_cache_size": 16 00:21:55.490 } 00:21:55.490 }, 00:21:55.490 { 00:21:55.490 "method": "bdev_raid_set_options", 00:21:55.490 "params": { 00:21:55.490 "process_window_size_kb": 1024, 00:21:55.490 "process_max_bandwidth_mb_sec": 0 00:21:55.490 } 00:21:55.490 }, 00:21:55.490 { 00:21:55.490 "method": "bdev_iscsi_set_options", 00:21:55.490 "params": { 00:21:55.490 "timeout_sec": 30 00:21:55.490 } 00:21:55.490 }, 
00:21:55.490 { 00:21:55.490 "method": "bdev_nvme_set_options", 00:21:55.490 "params": { 00:21:55.490 "action_on_timeout": "none", 00:21:55.490 "timeout_us": 0, 00:21:55.490 "timeout_admin_us": 0, 00:21:55.490 "keep_alive_timeout_ms": 10000, 00:21:55.490 "arbitration_burst": 0, 00:21:55.490 "low_priority_weight": 0, 00:21:55.490 "medium_priority_weight": 0, 00:21:55.490 "high_priority_weight": 0, 00:21:55.490 "nvme_adminq_poll_period_us": 10000, 00:21:55.490 "nvme_ioq_poll_period_us": 0, 00:21:55.490 "io_queue_requests": 512, 00:21:55.490 "delay_cmd_submit": true, 00:21:55.490 "transport_retry_count": 4, 00:21:55.490 "bdev_retry_count": 3, 00:21:55.490 "transport_ack_timeout": 0, 00:21:55.490 "ctrlr_loss_timeout_sec": 0, 00:21:55.490 "reconnect_delay_sec": 0, 00:21:55.490 "fast_io_fail_timeout_sec": 0, 00:21:55.490 "disable_auto_failback": false, 00:21:55.490 "generate_uuids": false, 00:21:55.490 "transport_tos": 0, 00:21:55.490 "nvme_error_stat": false, 00:21:55.490 "rdma_srq_size": 0, 00:21:55.490 "io_path_stat": false, 00:21:55.490 "allow_accel_sequence": false, 00:21:55.490 "rdma_max_cq_size": 0, 00:21:55.490 "rdma_cm_event_timeout_ms": 0, 00:21:55.490 "dhchap_digests": [ 00:21:55.490 "sha256", 00:21:55.490 "sha384", 00:21:55.490 "sha512" 00:21:55.490 ], 00:21:55.490 "dhchap_dhgroups": [ 00:21:55.490 "null", 00:21:55.490 "ffdhe2048", 00:21:55.490 "ffdhe3072", 00:21:55.490 "ffdhe4096", 00:21:55.490 "ffdhe6144", 00:21:55.490 "ffdhe8192" 00:21:55.490 ] 00:21:55.490 } 00:21:55.490 }, 00:21:55.490 { 00:21:55.490 "method": "bdev_nvme_attach_controller", 00:21:55.490 "params": { 00:21:55.490 "name": "nvme0", 00:21:55.490 "trtype": "TCP", 00:21:55.490 "adrfam": "IPv4", 00:21:55.490 "traddr": "10.0.0.2", 00:21:55.490 "trsvcid": "4420", 00:21:55.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.490 "prchk_reftag": false, 00:21:55.490 "prchk_guard": false, 00:21:55.490 "ctrlr_loss_timeout_sec": 0, 00:21:55.490 "reconnect_delay_sec": 0, 00:21:55.490 
"fast_io_fail_timeout_sec": 0, 00:21:55.490 "psk": "key0", 00:21:55.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.490 "hdgst": false, 00:21:55.490 "ddgst": false, 00:21:55.490 "multipath": "multipath" 00:21:55.490 } 00:21:55.490 }, 00:21:55.490 { 00:21:55.490 "method": "bdev_nvme_set_hotplug", 00:21:55.490 "params": { 00:21:55.490 "period_us": 100000, 00:21:55.490 "enable": false 00:21:55.490 } 00:21:55.490 }, 00:21:55.490 { 00:21:55.490 "method": "bdev_enable_histogram", 00:21:55.490 "params": { 00:21:55.490 "name": "nvme0n1", 00:21:55.490 "enable": true 00:21:55.490 } 00:21:55.490 }, 00:21:55.490 { 00:21:55.490 "method": "bdev_wait_for_examine" 00:21:55.490 } 00:21:55.490 ] 00:21:55.490 }, 00:21:55.490 { 00:21:55.490 "subsystem": "nbd", 00:21:55.490 "config": [] 00:21:55.490 } 00:21:55.490 ] 00:21:55.490 }' 00:21:55.490 [2024-11-19 11:16:03.638478] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:21:55.490 [2024-11-19 11:16:03.638534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162065 ] 00:21:55.490 [2024-11-19 11:16:03.728451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.490 [2024-11-19 11:16:03.758272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.751 [2024-11-19 11:16:03.893632] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.323 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.323 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:56.323 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:21:56.323 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:56.323 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.323 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.323 Running I/O for 1 seconds... 00:21:57.711 5241.00 IOPS, 20.47 MiB/s 00:21:57.711 Latency(us) 00:21:57.711 [2024-11-19T10:16:06.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.711 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:57.711 Verification LBA range: start 0x0 length 0x2000 00:21:57.711 nvme0n1 : 1.02 5272.53 20.60 0.00 0.00 24085.18 6389.76 28617.39 00:21:57.711 [2024-11-19T10:16:06.063Z] =================================================================================================================== 00:21:57.711 [2024-11-19T10:16:06.063Z] Total : 5272.53 20.60 0.00 0.00 24085.18 6389.76 28617.39 00:21:57.711 { 00:21:57.711 "results": [ 00:21:57.711 { 00:21:57.711 "job": "nvme0n1", 00:21:57.711 "core_mask": "0x2", 00:21:57.711 "workload": "verify", 00:21:57.711 "status": "finished", 00:21:57.711 "verify_range": { 00:21:57.711 "start": 0, 00:21:57.711 "length": 8192 00:21:57.711 }, 00:21:57.711 "queue_depth": 128, 00:21:57.711 "io_size": 4096, 00:21:57.711 "runtime": 1.018297, 00:21:57.711 "iops": 5272.5285452083235, 00:21:57.711 "mibps": 20.595814629720014, 00:21:57.711 "io_failed": 0, 00:21:57.711 "io_timeout": 0, 00:21:57.711 "avg_latency_us": 24085.176780281865, 00:21:57.711 "min_latency_us": 6389.76, 00:21:57.711 "max_latency_us": 28617.386666666665 00:21:57.711 } 00:21:57.711 ], 00:21:57.711 "core_count": 1 00:21:57.711 } 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:57.711 nvmf_trace.0 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 4162065 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4162065 ']' 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4162065 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 4162065 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4162065' 00:21:57.711 killing process with pid 4162065 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4162065 00:21:57.711 Received shutdown signal, test time was about 1.000000 seconds 00:21:57.711 00:21:57.711 Latency(us) 00:21:57.711 [2024-11-19T10:16:06.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.711 [2024-11-19T10:16:06.063Z] =================================================================================================================== 00:21:57.711 [2024-11-19T10:16:06.063Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4162065 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.711 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.711 rmmod nvme_tcp 00:21:57.711 rmmod nvme_fabrics 00:21:57.711 rmmod nvme_keyring 00:21:57.711 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.711 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:57.711 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:57.711 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 4161942 ']' 00:21:57.712 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 4161942 00:21:57.712 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4161942 ']' 00:21:57.712 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4161942 00:21:57.712 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:57.712 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.712 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4161942 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4161942' 00:21:57.973 killing process with pid 4161942 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4161942 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4161942 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.973 11:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:57.973 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.974 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.974 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.974 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.974 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.974 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.974 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.X0pbL80Bxj /tmp/tmp.RKADfgfsik /tmp/tmp.Kfv4rzTWpm 00:22:00.524 00:22:00.524 real 1m23.419s 00:22:00.524 user 2m5.494s 00:22:00.524 sys 0m27.889s 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.524 ************************************ 00:22:00.524 END TEST nvmf_tls 00:22:00.524 ************************************ 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.524 
11:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:00.524 ************************************ 00:22:00.524 START TEST nvmf_fips 00:22:00.524 ************************************ 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:00.524 * Looking for test storage... 00:22:00.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.524 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:00.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.525 --rc genhtml_branch_coverage=1 00:22:00.525 --rc genhtml_function_coverage=1 00:22:00.525 --rc genhtml_legend=1 00:22:00.525 --rc geninfo_all_blocks=1 00:22:00.525 --rc geninfo_unexecuted_blocks=1 00:22:00.525 00:22:00.525 ' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:00.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.525 --rc genhtml_branch_coverage=1 00:22:00.525 --rc genhtml_function_coverage=1 00:22:00.525 --rc genhtml_legend=1 00:22:00.525 --rc geninfo_all_blocks=1 00:22:00.525 --rc geninfo_unexecuted_blocks=1 00:22:00.525 00:22:00.525 ' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:00.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.525 --rc genhtml_branch_coverage=1 00:22:00.525 --rc genhtml_function_coverage=1 00:22:00.525 --rc genhtml_legend=1 00:22:00.525 --rc geninfo_all_blocks=1 00:22:00.525 --rc geninfo_unexecuted_blocks=1 00:22:00.525 00:22:00.525 ' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:00.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.525 --rc genhtml_branch_coverage=1 00:22:00.525 --rc genhtml_function_coverage=1 00:22:00.525 --rc genhtml_legend=1 00:22:00.525 --rc geninfo_all_blocks=1 00:22:00.525 --rc geninfo_unexecuted_blocks=1 00:22:00.525 00:22:00.525 ' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:00.525 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:00.526 Error setting digest 00:22:00.526 40F2F5DA437F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:00.526 40F2F5DA437F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.526 11:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.526 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.670 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:08.671 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:08.671 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:08.671 Found net devices under 0000:31:00.0: cvl_0_0 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:08.671 Found net devices under 0000:31:00.1: cvl_0_1 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.671 11:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.671 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:22:08.932 00:22:08.932 --- 10.0.0.2 ping statistics --- 00:22:08.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.932 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:22:08.932 00:22:08.932 --- 10.0.0.1 ping statistics --- 00:22:08.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.932 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.932 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.933 11:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=4167442 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 4167442 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 4167442 ']' 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.933 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.194 [2024-11-19 11:16:17.318364] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:22:09.194 [2024-11-19 11:16:17.318438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.194 [2024-11-19 11:16:17.426389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.194 [2024-11-19 11:16:17.476540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.194 [2024-11-19 11:16:17.476592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.194 [2024-11-19 11:16:17.476601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.194 [2024-11-19 11:16:17.476608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.194 [2024-11-19 11:16:17.476615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.194 [2024-11-19 11:16:17.477410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.767 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.767 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:09.767 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.767 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.767 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.zDk 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.zDk 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.zDk 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.zDk 00:22:10.028 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:10.028 [2024-11-19 11:16:18.339362] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.028 [2024-11-19 11:16:18.355341] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.028 [2024-11-19 11:16:18.355656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.290 malloc0 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=4167584 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 4167584 /var/tmp/bdevperf.sock 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 4167584 ']' 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.290 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:10.290 [2024-11-19 11:16:18.499554] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:22:10.290 [2024-11-19 11:16:18.499634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167584 ] 00:22:10.290 [2024-11-19 11:16:18.569801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.290 [2024-11-19 11:16:18.606257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.233 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.233 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:11.233 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.zDk 00:22:11.233 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:11.495 [2024-11-19 11:16:19.629055] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.495 TLSTESTn1 00:22:11.495 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:11.495 Running I/O for 10 seconds... 
00:22:13.822 5361.00 IOPS, 20.94 MiB/s [2024-11-19T10:16:23.117Z] 5485.00 IOPS, 21.43 MiB/s [2024-11-19T10:16:24.059Z] 5092.00 IOPS, 19.89 MiB/s [2024-11-19T10:16:25.001Z] 5018.75 IOPS, 19.60 MiB/s [2024-11-19T10:16:26.089Z] 5146.80 IOPS, 20.10 MiB/s [2024-11-19T10:16:27.036Z] 5184.17 IOPS, 20.25 MiB/s [2024-11-19T10:16:27.978Z] 5241.43 IOPS, 20.47 MiB/s [2024-11-19T10:16:28.920Z] 5197.38 IOPS, 20.30 MiB/s [2024-11-19T10:16:29.861Z] 5264.11 IOPS, 20.56 MiB/s [2024-11-19T10:16:30.122Z] 5313.70 IOPS, 20.76 MiB/s 00:22:21.770 Latency(us) 00:22:21.770 [2024-11-19T10:16:30.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.770 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.770 Verification LBA range: start 0x0 length 0x2000 00:22:21.770 TLSTESTn1 : 10.05 5298.46 20.70 0.00 0.00 24089.51 6990.51 52647.25 00:22:21.770 [2024-11-19T10:16:30.122Z] =================================================================================================================== 00:22:21.770 [2024-11-19T10:16:30.122Z] Total : 5298.46 20.70 0.00 0.00 24089.51 6990.51 52647.25 00:22:21.770 { 00:22:21.770 "results": [ 00:22:21.770 { 00:22:21.770 "job": "TLSTESTn1", 00:22:21.770 "core_mask": "0x4", 00:22:21.770 "workload": "verify", 00:22:21.770 "status": "finished", 00:22:21.770 "verify_range": { 00:22:21.770 "start": 0, 00:22:21.770 "length": 8192 00:22:21.770 }, 00:22:21.770 "queue_depth": 128, 00:22:21.770 "io_size": 4096, 00:22:21.770 "runtime": 10.052731, 00:22:21.770 "iops": 5298.460686951636, 00:22:21.770 "mibps": 20.697112058404826, 00:22:21.770 "io_failed": 0, 00:22:21.770 "io_timeout": 0, 00:22:21.770 "avg_latency_us": 24089.509770701916, 00:22:21.770 "min_latency_us": 6990.506666666667, 00:22:21.770 "max_latency_us": 52647.253333333334 00:22:21.770 } 00:22:21.770 ], 00:22:21.770 "core_count": 1 00:22:21.770 } 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:21.770 
11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:21.770 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:21.770 nvmf_trace.0 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4167584 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 4167584 ']' 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 4167584 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4167584 00:22:21.770 11:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4167584' 00:22:21.770 killing process with pid 4167584 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 4167584 00:22:21.770 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.770 00:22:21.770 Latency(us) 00:22:21.770 [2024-11-19T10:16:30.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.770 [2024-11-19T10:16:30.122Z] =================================================================================================================== 00:22:21.770 [2024-11-19T10:16:30.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.770 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 4167584 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:22.033 rmmod nvme_tcp 00:22:22.033 rmmod nvme_fabrics 00:22:22.033 rmmod nvme_keyring 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 4167442 ']' 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 4167442 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 4167442 ']' 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 4167442 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4167442 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4167442' 00:22:22.033 killing process with pid 4167442 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 4167442 00:22:22.033 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 4167442 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.295 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.zDk 00:22:24.206 00:22:24.206 real 0m24.104s 00:22:24.206 user 0m25.083s 00:22:24.206 sys 0m10.327s 00:22:24.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.206 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:24.206 ************************************ 00:22:24.206 END TEST nvmf_fips 00:22:24.206 ************************************ 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:24.467 ************************************ 00:22:24.467 START TEST nvmf_control_msg_list 00:22:24.467 ************************************ 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:24.467 * Looking for test storage... 00:22:24.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.467 11:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.467 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:24.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.468 --rc genhtml_branch_coverage=1 00:22:24.468 --rc genhtml_function_coverage=1 00:22:24.468 --rc genhtml_legend=1 00:22:24.468 --rc geninfo_all_blocks=1 00:22:24.468 --rc geninfo_unexecuted_blocks=1 00:22:24.468 00:22:24.468 ' 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:24.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.468 --rc genhtml_branch_coverage=1 00:22:24.468 --rc genhtml_function_coverage=1 00:22:24.468 --rc genhtml_legend=1 00:22:24.468 --rc geninfo_all_blocks=1 00:22:24.468 --rc geninfo_unexecuted_blocks=1 00:22:24.468 00:22:24.468 ' 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:24.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.468 --rc genhtml_branch_coverage=1 00:22:24.468 --rc genhtml_function_coverage=1 00:22:24.468 --rc genhtml_legend=1 00:22:24.468 --rc geninfo_all_blocks=1 00:22:24.468 --rc geninfo_unexecuted_blocks=1 00:22:24.468 00:22:24.468 ' 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:22:24.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.468 --rc genhtml_branch_coverage=1 00:22:24.468 --rc genhtml_function_coverage=1 00:22:24.468 --rc genhtml_legend=1 00:22:24.468 --rc geninfo_all_blocks=1 00:22:24.468 --rc geninfo_unexecuted_blocks=1 00:22:24.468 00:22:24.468 ' 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.468 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.730 11:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.730 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.731 11:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.731 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.876 11:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:32.876 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:32.876 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.876 11:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:32.876 Found net devices under 0000:31:00.0: cvl_0_0 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.876 11:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:32.876 Found net devices under 0000:31:00.1: cvl_0_1 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.876 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.876 11:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:22:32.876 00:22:32.876 --- 10.0.0.2 ping statistics --- 00:22:32.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.876 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:22:32.876 00:22:32.876 --- 10.0.0.1 ping statistics --- 00:22:32.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.876 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.876 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=4174703 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 4174703 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 4174703 ']' 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.138 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:33.138 [2024-11-19 11:16:41.323080] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:22:33.138 [2024-11-19 11:16:41.323136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.138 [2024-11-19 11:16:41.411395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.138 [2024-11-19 11:16:41.449761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.138 [2024-11-19 11:16:41.449797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.138 [2024-11-19 11:16:41.449805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.138 [2024-11-19 11:16:41.449816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.138 [2024-11-19 11:16:41.449822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
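The network setup traced above (nvmf_tcp_init in test/nvmf/common.sh) boils down to moving the target-side NIC into a private network namespace, addressing both ends, and opening the NVMe/TCP port in the firewall. A minimal sketch of those steps, assuming the interface names cvl_0_0/cvl_0_1 and addresses seen in this run (environment-specific; must run as root):

```shell
# Sketch of the namespace split performed by nvmf_tcp_init; interface
# names and IPs are taken from this particular run and will differ elsewhere.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"          # target NIC now lives in the namespace

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the target port (4420) through the host firewall.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, as in the log: each side pings the other.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this in place, nvmf_tgt is launched under `ip netns exec "$NS"` so it listens on 10.0.0.2 while the perf initiators connect from the root namespace, which is exactly the split visible in the nvmfappstart trace that follows.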
00:22:33.138 [2024-11-19 11:16:41.450425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.080 [2024-11-19 11:16:42.155526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.080 Malloc0 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.080 [2024-11-19 11:16:42.206410] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=4174869 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=4174870 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=4174871 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 4174869 00:22:34.080 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:34.080 [2024-11-19 11:16:42.276842] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
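The rpc_cmd calls above are driven over the SPDK RPC socket; the same target configuration can be issued by hand with scripts/rpc.py. A sketch assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and an SPDK checkout in the current directory; the transport options, subsystem NQN, and malloc sizing are copied from this run:

```shell
# Reproduce the control_msg_list.sh target setup via rpc.py.
# RPC path is assumed relative to an SPDK checkout (adjust as needed).
RPC=./scripts/rpc.py
SUBNQN=nqn.2024-07.io.spdk:cnode0

# TCP transport with a deliberately tiny control-message pool (the knob
# under test here) and 768-byte in-capsule data, flags as in this run.
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# Subsystem with a 32 MiB, 512-byte-block malloc namespace, listening on TCP 4420.
$RPC nvmf_create_subsystem "$SUBNQN" -a
$RPC bdev_malloc_create -b Malloc0 32 512
$RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc0
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
```

Starving the transport down to a single control message is what makes two of the three concurrent perf initiators below queue behind the pool: one run completes with sub-millisecond average latency while the other two sit near the 40 ms mark.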
00:22:34.080 [2024-11-19 11:16:42.296768] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:34.080 [2024-11-19 11:16:42.306791] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:35.024 Initializing NVMe Controllers 00:22:35.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:35.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:35.024 Initialization complete. Launching workers. 00:22:35.024 ======================================================== 00:22:35.024 Latency(us) 00:22:35.024 Device Information : IOPS MiB/s Average min max 00:22:35.024 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1583.00 6.18 631.66 308.44 858.62 00:22:35.024 ======================================================== 00:22:35.024 Total : 1583.00 6.18 631.66 308.44 858.62 00:22:35.024 00:22:35.284 Initializing NVMe Controllers 00:22:35.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:35.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:35.284 Initialization complete. Launching workers. 
00:22:35.284 ======================================================== 00:22:35.284 Latency(us) 00:22:35.284 Device Information : IOPS MiB/s Average min max 00:22:35.284 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40897.99 40751.02 41005.64 00:22:35.284 ======================================================== 00:22:35.284 Total : 25.00 0.10 40897.99 40751.02 41005.64 00:22:35.284 00:22:35.284 Initializing NVMe Controllers 00:22:35.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:35.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:35.284 Initialization complete. Launching workers. 00:22:35.284 ======================================================== 00:22:35.284 Latency(us) 00:22:35.284 Device Information : IOPS MiB/s Average min max 00:22:35.284 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40921.25 40778.37 41407.13 00:22:35.284 ======================================================== 00:22:35.284 Total : 25.00 0.10 40921.25 40778.37 41407.13 00:22:35.284 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 4174870 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 4174871 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.284 11:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.284 rmmod nvme_tcp 00:22:35.284 rmmod nvme_fabrics 00:22:35.284 rmmod nvme_keyring 00:22:35.284 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 4174703 ']' 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 4174703 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 4174703 ']' 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 4174703 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4174703 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 4174703' 00:22:35.545 killing process with pid 4174703 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 4174703 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 4174703 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.545 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.092 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:38.092 00:22:38.092 real 0m13.313s 00:22:38.092 user 0m8.183s 
00:22:38.092 sys 0m7.193s 00:22:38.092 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.092 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:38.092 ************************************ 00:22:38.092 END TEST nvmf_control_msg_list 00:22:38.092 ************************************ 00:22:38.092 11:16:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:38.092 11:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:38.092 11:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.092 11:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.092 ************************************ 00:22:38.092 START TEST nvmf_wait_for_buf 00:22:38.092 ************************************ 00:22:38.092 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:38.092 * Looking for test storage... 
00:22:38.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:22:38.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.092 --rc genhtml_branch_coverage=1 00:22:38.092 --rc genhtml_function_coverage=1 00:22:38.092 --rc genhtml_legend=1 00:22:38.092 --rc geninfo_all_blocks=1 00:22:38.092 --rc geninfo_unexecuted_blocks=1 00:22:38.092 00:22:38.092 ' 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.092 --rc genhtml_branch_coverage=1 00:22:38.092 --rc genhtml_function_coverage=1 00:22:38.092 --rc genhtml_legend=1 00:22:38.092 --rc geninfo_all_blocks=1 00:22:38.092 --rc geninfo_unexecuted_blocks=1 00:22:38.092 00:22:38.092 ' 00:22:38.092 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.093 --rc genhtml_branch_coverage=1 00:22:38.093 --rc genhtml_function_coverage=1 00:22:38.093 --rc genhtml_legend=1 00:22:38.093 --rc geninfo_all_blocks=1 00:22:38.093 --rc geninfo_unexecuted_blocks=1 00:22:38.093 00:22:38.093 ' 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.093 --rc genhtml_branch_coverage=1 00:22:38.093 --rc genhtml_function_coverage=1 00:22:38.093 --rc genhtml_legend=1 00:22:38.093 --rc geninfo_all_blocks=1 00:22:38.093 --rc geninfo_unexecuted_blocks=1 00:22:38.093 00:22:38.093 ' 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.093 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:46.245 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:46.245 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:46.245 Found net devices under 0000:31:00.0: cvl_0_0 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.245 11:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:46.245 Found net devices under 0000:31:00.1: cvl_0_1 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.245 11:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.245 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.246 11:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:22:46.246 00:22:46.246 --- 10.0.0.2 ping statistics --- 00:22:46.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.246 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:22:46.246 00:22:46.246 --- 10.0.0.1 ping statistics --- 00:22:46.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.246 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=4179889 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 4179889 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 4179889 ']' 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.246 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:46.246 [2024-11-19 11:16:54.513959] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:22:46.246 [2024-11-19 11:16:54.514011] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.506 [2024-11-19 11:16:54.601754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.506 [2024-11-19 11:16:54.637370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.506 [2024-11-19 11:16:54.637407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:46.506 [2024-11-19 11:16:54.637415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.506 [2024-11-19 11:16:54.637421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.506 [2024-11-19 11:16:54.637427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.507 [2024-11-19 11:16:54.638015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:47.078 
11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.078 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:47.341 Malloc0 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:47.341 [2024-11-19 11:16:55.445195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:47.341 [2024-11-19 11:16:55.481428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:47.341 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:47.341 [2024-11-19 11:16:55.583960] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:48.726 Initializing NVMe Controllers 00:22:48.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:48.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:48.727 Initialization complete. Launching workers. 00:22:48.727 ======================================================== 00:22:48.727 Latency(us) 00:22:48.727 Device Information : IOPS MiB/s Average min max 00:22:48.727 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 166002.01 47875.75 191555.74 00:22:48.727 ======================================================== 00:22:48.727 Total : 25.00 3.12 166002.01 47875.75 191555.74 00:22:48.727 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.727 11:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.727 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.727 rmmod nvme_tcp 00:22:48.727 rmmod nvme_fabrics 00:22:48.988 rmmod nvme_keyring 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 4179889 ']' 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 4179889 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 4179889 ']' 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 4179889 
00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4179889 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4179889' 00:22:48.988 killing process with pid 4179889 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 4179889 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 4179889 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.988 11:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.988 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.534 00:22:51.534 real 0m13.398s 00:22:51.534 user 0m5.339s 00:22:51.534 sys 0m6.609s 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:51.534 ************************************ 00:22:51.534 END TEST nvmf_wait_for_buf 00:22:51.534 ************************************ 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.534 11:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.678 
11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:59.678 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.678 11:17:07 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:59.678 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:59.678 Found net devices under 0000:31:00.0: cvl_0_0 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:59.678 Found net devices under 0000:31:00.1: cvl_0_1 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:59.678 ************************************ 00:22:59.678 START TEST nvmf_perf_adq 00:22:59.678 ************************************ 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:59.678 * Looking for test storage... 00:22:59.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.678 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:22:59.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:59.679 --rc genhtml_branch_coverage=1
00:22:59.679 --rc genhtml_function_coverage=1
00:22:59.679 --rc genhtml_legend=1
00:22:59.679 --rc geninfo_all_blocks=1
00:22:59.679 --rc geninfo_unexecuted_blocks=1
00:22:59.679 
00:22:59.679 '
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:22:59.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:59.679 --rc genhtml_branch_coverage=1
00:22:59.679 --rc genhtml_function_coverage=1
00:22:59.679 --rc genhtml_legend=1
00:22:59.679 --rc geninfo_all_blocks=1
00:22:59.679 --rc geninfo_unexecuted_blocks=1
00:22:59.679 
00:22:59.679 '
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:22:59.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:59.679 --rc genhtml_branch_coverage=1
00:22:59.679 --rc genhtml_function_coverage=1
00:22:59.679 --rc genhtml_legend=1
00:22:59.679 --rc geninfo_all_blocks=1
00:22:59.679 --rc geninfo_unexecuted_blocks=1
00:22:59.679 
00:22:59.679 '
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:22:59.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:59.679 --rc genhtml_branch_coverage=1
00:22:59.679 --rc genhtml_function_coverage=1
00:22:59.679 --rc genhtml_legend=1
00:22:59.679 --rc geninfo_all_blocks=1
00:22:59.679 --rc geninfo_unexecuted_blocks=1
00:22:59.679 
00:22:59.679 '
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:59.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:22:59.679 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:07.819 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:23:07.820 Found 0000:31:00.0 (0x8086 - 0x159b)
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:23:07.820 Found 0000:31:00.1 (0x8086 - 0x159b)
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:23:07.820 Found net devices under 0000:31:00.0: cvl_0_0
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:23:07.820 Found net devices under 0000:31:00.1: cvl_0_1
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 ))
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:23:07.820 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:23:08.762 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:23:11.306 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:23:16.663 Found 0000:31:00.0 (0x8086 - 0x159b)
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:23:16.663 Found 0000:31:00.1 (0x8086 - 0x159b)
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:23:16.663 Found net devices under 0000:31:00.0: cvl_0_0
00:23:16.663 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:23:16.664 Found net devices under 0000:31:00.1: cvl_0_1
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:16.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:16.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms
00:23:16.664 
00:23:16.664 --- 10.0.0.2 ping statistics ---
00:23:16.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:16.664 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:16.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:16.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms
00:23:16.664 
00:23:16.664 --- 10.0.0.1 ping statistics ---
00:23:16.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:16.664 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4191171
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 4191171
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4191171 ']'
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:16.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:16.664 [2024-11-19 11:17:24.534543] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization...
00:23:16.664 [2024-11-19 11:17:24.534584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:16.664 [2024-11-19 11:17:24.608961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:16.664 [2024-11-19 11:17:24.646001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:16.664 [2024-11-19 11:17:24.646030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:16.664 [2024-11-19 11:17:24.646038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:16.664 [2024-11-19 11:17:24.646045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:16.664 [2024-11-19 11:17:24.646051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:16.664 [2024-11-19 11:17:24.647792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.664 [2024-11-19 11:17:24.648046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.664 [2024-11-19 11:17:24.648046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.664 [2024-11-19 11:17:24.647906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:16.664 11:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.664 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.665 [2024-11-19 11:17:24.868053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.665 Malloc1 00:23:16.665 11:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.665 [2024-11-19 11:17:24.939233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=4191220 00:23:16.665 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:16.665 11:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:19.212 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:19.212 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.212 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.212 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.212 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:19.212 "tick_rate": 2400000000, 00:23:19.212 "poll_groups": [ 00:23:19.212 { 00:23:19.212 "name": "nvmf_tgt_poll_group_000", 00:23:19.212 "admin_qpairs": 1, 00:23:19.212 "io_qpairs": 1, 00:23:19.212 "current_admin_qpairs": 1, 00:23:19.212 "current_io_qpairs": 1, 00:23:19.212 "pending_bdev_io": 0, 00:23:19.212 "completed_nvme_io": 18916, 00:23:19.212 "transports": [ 00:23:19.212 { 00:23:19.212 "trtype": "TCP" 00:23:19.212 } 00:23:19.212 ] 00:23:19.212 }, 00:23:19.212 { 00:23:19.212 "name": "nvmf_tgt_poll_group_001", 00:23:19.212 "admin_qpairs": 0, 00:23:19.212 "io_qpairs": 1, 00:23:19.212 "current_admin_qpairs": 0, 00:23:19.212 "current_io_qpairs": 1, 00:23:19.212 "pending_bdev_io": 0, 00:23:19.212 "completed_nvme_io": 28006, 00:23:19.212 "transports": [ 00:23:19.212 { 00:23:19.212 "trtype": "TCP" 00:23:19.212 } 00:23:19.212 ] 00:23:19.212 }, 00:23:19.212 { 00:23:19.212 "name": "nvmf_tgt_poll_group_002", 00:23:19.212 "admin_qpairs": 0, 00:23:19.212 "io_qpairs": 1, 00:23:19.212 "current_admin_qpairs": 0, 00:23:19.212 "current_io_qpairs": 1, 00:23:19.212 "pending_bdev_io": 0, 00:23:19.212 "completed_nvme_io": 19021, 00:23:19.212 
"transports": [ 00:23:19.212 { 00:23:19.212 "trtype": "TCP" 00:23:19.212 } 00:23:19.212 ] 00:23:19.212 }, 00:23:19.212 { 00:23:19.212 "name": "nvmf_tgt_poll_group_003", 00:23:19.212 "admin_qpairs": 0, 00:23:19.212 "io_qpairs": 1, 00:23:19.212 "current_admin_qpairs": 0, 00:23:19.212 "current_io_qpairs": 1, 00:23:19.212 "pending_bdev_io": 0, 00:23:19.212 "completed_nvme_io": 19160, 00:23:19.212 "transports": [ 00:23:19.212 { 00:23:19.212 "trtype": "TCP" 00:23:19.212 } 00:23:19.212 ] 00:23:19.212 } 00:23:19.212 ] 00:23:19.212 }' 00:23:19.212 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:19.212 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:19.212 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:19.212 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:19.212 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 4191220 00:23:27.353 Initializing NVMe Controllers 00:23:27.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:27.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:27.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:27.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:27.353 Initialization complete. Launching workers. 
00:23:27.353 ======================================================== 00:23:27.353 Latency(us) 00:23:27.353 Device Information : IOPS MiB/s Average min max 00:23:27.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10985.40 42.91 5825.93 1951.23 10467.74 00:23:27.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14676.70 57.33 4360.10 1339.33 9455.84 00:23:27.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13163.70 51.42 4861.28 1379.57 11573.54 00:23:27.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12944.20 50.56 4944.79 1008.38 10864.27 00:23:27.353 ======================================================== 00:23:27.353 Total : 51770.00 202.23 4944.77 1008.38 11573.54 00:23:27.353 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.353 rmmod nvme_tcp 00:23:27.353 rmmod nvme_fabrics 00:23:27.353 rmmod nvme_keyring 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:27.353 11:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4191171 ']' 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4191171 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4191171 ']' 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4191171 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.353 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4191171 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4191171' 00:23:27.354 killing process with pid 4191171 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4191171 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4191171 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:27.354 
11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.354 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.270 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.270 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:29.270 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:29.270 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:30.655 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:33.199 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.504 11:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:38.504 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.504 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:38.505 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.505 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:38.505 Found net devices under 0000:31:00.0: cvl_0_0 00:23:38.505 11:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:38.505 Found net devices under 0000:31:00.1: cvl_0_1 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:23:38.505 00:23:38.505 --- 10.0.0.2 ping statistics --- 00:23:38.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.505 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:23:38.505 00:23:38.505 --- 10.0.0.1 ping statistics --- 00:23:38.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.505 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:38.505 net.core.busy_poll = 1 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:38.505 net.core.busy_read = 1 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:38.505 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2585 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2585 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 
00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2585 ']' 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.506 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:38.506 [2024-11-19 11:17:46.734350] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:23:38.506 [2024-11-19 11:17:46.734420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.506 [2024-11-19 11:17:46.826914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.767 [2024-11-19 11:17:46.868568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.767 [2024-11-19 11:17:46.868607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.767 [2024-11-19 11:17:46.868616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.767 [2024-11-19 11:17:46.868622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:38.767 [2024-11-19 11:17:46.868628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.767 [2024-11-19 11:17:46.870235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.767 [2024-11-19 11:17:46.870357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.767 [2024-11-19 11:17:46.870511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.767 [2024-11-19 11:17:46.870512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.339 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.602 [2024-11-19 11:17:47.712440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.602 11:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.602 Malloc1 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.602 [2024-11-19 11:17:47.782280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2711 
00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:39.602 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:41.520 "tick_rate": 2400000000, 00:23:41.520 "poll_groups": [ 00:23:41.520 { 00:23:41.520 "name": "nvmf_tgt_poll_group_000", 00:23:41.520 "admin_qpairs": 1, 00:23:41.520 "io_qpairs": 2, 00:23:41.520 "current_admin_qpairs": 1, 00:23:41.520 "current_io_qpairs": 2, 00:23:41.520 "pending_bdev_io": 0, 00:23:41.520 "completed_nvme_io": 27541, 00:23:41.520 "transports": [ 00:23:41.520 { 00:23:41.520 "trtype": "TCP" 00:23:41.520 } 00:23:41.520 ] 00:23:41.520 }, 00:23:41.520 { 00:23:41.520 "name": "nvmf_tgt_poll_group_001", 00:23:41.520 "admin_qpairs": 0, 00:23:41.520 "io_qpairs": 2, 00:23:41.520 "current_admin_qpairs": 0, 00:23:41.520 "current_io_qpairs": 2, 00:23:41.520 "pending_bdev_io": 0, 00:23:41.520 "completed_nvme_io": 40241, 00:23:41.520 "transports": [ 00:23:41.520 { 00:23:41.520 "trtype": "TCP" 00:23:41.520 } 00:23:41.520 ] 00:23:41.520 }, 00:23:41.520 { 00:23:41.520 "name": "nvmf_tgt_poll_group_002", 00:23:41.520 "admin_qpairs": 0, 00:23:41.520 "io_qpairs": 0, 00:23:41.520 "current_admin_qpairs": 0, 
00:23:41.520 "current_io_qpairs": 0, 00:23:41.520 "pending_bdev_io": 0, 00:23:41.520 "completed_nvme_io": 0, 00:23:41.520 "transports": [ 00:23:41.520 { 00:23:41.520 "trtype": "TCP" 00:23:41.520 } 00:23:41.520 ] 00:23:41.520 }, 00:23:41.520 { 00:23:41.520 "name": "nvmf_tgt_poll_group_003", 00:23:41.520 "admin_qpairs": 0, 00:23:41.520 "io_qpairs": 0, 00:23:41.520 "current_admin_qpairs": 0, 00:23:41.520 "current_io_qpairs": 0, 00:23:41.520 "pending_bdev_io": 0, 00:23:41.520 "completed_nvme_io": 0, 00:23:41.520 "transports": [ 00:23:41.520 { 00:23:41.520 "trtype": "TCP" 00:23:41.520 } 00:23:41.520 ] 00:23:41.520 } 00:23:41.520 ] 00:23:41.520 }' 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:41.520 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2711 00:23:49.689 Initializing NVMe Controllers 00:23:49.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:49.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:49.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:49.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:49.689 Initialization complete. Launching workers. 
00:23:49.689 ======================================================== 00:23:49.689 Latency(us) 00:23:49.689 Device Information : IOPS MiB/s Average min max 00:23:49.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9496.00 37.09 6740.36 1005.56 54201.67 00:23:49.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12694.60 49.59 5041.73 1194.66 49745.44 00:23:49.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8333.90 32.55 7703.82 976.60 49986.96 00:23:49.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8871.10 34.65 7240.80 1249.89 52464.64 00:23:49.689 ======================================================== 00:23:49.689 Total : 39395.60 153.89 6509.51 976.60 54201.67 00:23:49.689 00:23:49.689 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:49.689 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.689 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:49.689 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.689 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:49.689 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.689 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.689 rmmod nvme_tcp 00:23:49.951 rmmod nvme_fabrics 00:23:49.951 rmmod nvme_keyring 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:49.951 11:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2585 ']' 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2585 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2585 ']' 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2585 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2585 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2585' 00:23:49.951 killing process with pid 2585 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2585 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2585 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:49.951 11:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.951 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:53.254 00:23:53.254 real 0m54.060s 00:23:53.254 user 2m47.680s 00:23:53.254 sys 0m11.981s 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.254 ************************************ 00:23:53.254 END TEST nvmf_perf_adq 00:23:53.254 ************************************ 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:53.254 ************************************ 00:23:53.254 START TEST nvmf_shutdown 00:23:53.254 ************************************ 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:53.254 * Looking for test storage... 00:23:53.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:53.254 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:53.516 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:53.516 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.516 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.516 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.517 11:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:53.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.517 --rc genhtml_branch_coverage=1 00:23:53.517 --rc genhtml_function_coverage=1 00:23:53.517 --rc genhtml_legend=1 00:23:53.517 --rc geninfo_all_blocks=1 00:23:53.517 --rc geninfo_unexecuted_blocks=1 00:23:53.517 00:23:53.517 ' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:53.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.517 --rc genhtml_branch_coverage=1 00:23:53.517 --rc genhtml_function_coverage=1 00:23:53.517 --rc genhtml_legend=1 00:23:53.517 --rc geninfo_all_blocks=1 00:23:53.517 --rc geninfo_unexecuted_blocks=1 00:23:53.517 00:23:53.517 ' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:53.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.517 --rc genhtml_branch_coverage=1 00:23:53.517 --rc genhtml_function_coverage=1 00:23:53.517 --rc genhtml_legend=1 00:23:53.517 --rc geninfo_all_blocks=1 00:23:53.517 --rc geninfo_unexecuted_blocks=1 00:23:53.517 00:23:53.517 ' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:53.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.517 --rc genhtml_branch_coverage=1 00:23:53.517 --rc genhtml_function_coverage=1 00:23:53.517 --rc genhtml_legend=1 00:23:53.517 --rc geninfo_all_blocks=1 00:23:53.517 --rc geninfo_unexecuted_blocks=1 00:23:53.517 00:23:53.517 ' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.517 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:53.517 ************************************ 00:23:53.517 START TEST nvmf_shutdown_tc1 00:23:53.517 ************************************ 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.518 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.529 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.529 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.529 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.529 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.529 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.529 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.529 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.529 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:03.530 11:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.530 11:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:03.530 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.530 11:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:03.530 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:03.530 Found net devices under 0000:31:00.0: cvl_0_0 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:03.530 Found net devices under 0000:31:00.1: cvl_0_1 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.530 11:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:24:03.530 00:24:03.530 --- 10.0.0.2 ping statistics --- 00:24:03.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.530 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:24:03.530 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:24:03.531 00:24:03.531 --- 10.0.0.1 ping statistics --- 00:24:03.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.531 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=10376 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 10376 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 10376 ']' 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:03.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.531 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 [2024-11-19 11:18:10.464603] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:24:03.531 [2024-11-19 11:18:10.464676] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.531 [2024-11-19 11:18:10.575005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.531 [2024-11-19 11:18:10.627374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.531 [2024-11-19 11:18:10.627431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.531 [2024-11-19 11:18:10.627440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.531 [2024-11-19 11:18:10.627447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.531 [2024-11-19 11:18:10.627454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
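Earlier in this trace, `test/nvmf/common.sh` line 33 logs `[: : integer expression expected` because an empty string reaches the arithmetic test operator (the xtrace shows `'[' '' -eq 1 ']'`). A minimal sketch of the failure and the usual default-expansion guard; the variable name is illustrative, not SPDK's:

```shell
# The pattern behind the warning in this log: comparing an unset/empty
# variable with the integer test operator -eq.
FLAG=""

# This is effectively what common.sh line 33 ran; with an empty operand,
# [ prints "integer expression expected" and returns nonzero.
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Defaulting the expansion keeps the test well-formed when FLAG is empty.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

The script still behaves the same for `FLAG=0` and `FLAG=1`; the guard only changes how the empty case is handled.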
00:24:03.531 [2024-11-19 11:18:10.629516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.531 [2024-11-19 11:18:10.629681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.531 [2024-11-19 11:18:10.629849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.531 [2024-11-19 11:18:10.629848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 [2024-11-19 11:18:11.311851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.531 11:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 Malloc1 00:24:03.531 [2024-11-19 11:18:11.431174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.531 Malloc2 00:24:03.531 Malloc3 00:24:03.531 Malloc4 00:24:03.531 Malloc5 00:24:03.531 Malloc6 00:24:03.531 Malloc7 00:24:03.531 Malloc8 00:24:03.531 Malloc9 
00:24:03.531 Malloc10 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=10801 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 10801 /var/tmp/bdevperf.sock 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 10801 ']' 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
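The `waitforlisten` steps in this log ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...") poll until the target process binds its RPC socket. A minimal sketch of that polling pattern, assuming a plain `-S` file test and an illustrative retry budget rather than SPDK's actual helper:

```shell
# Sketch of a "waitforlisten"-style poll: succeed once a UNIX domain socket
# exists at the given path, give up after max_retries * 0.1 seconds.
wait_for_unix_sock() {
    local sock_path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$sock_path" ] && return 0   # -S: file exists and is a socket
        sleep 0.1
    done
    return 1
}

# Example: bind a throwaway socket in the background, then wait for it.
rm -f /tmp/demo_rpc.sock
python3 -c 'import socket, time
s = socket.socket(socket.AF_UNIX)
s.bind("/tmp/demo_rpc.sock")
time.sleep(2)' &
wait_for_unix_sock /tmp/demo_rpc.sock 50 && echo "socket is up"
```

The real helper additionally verifies the PID is alive between retries so a crashed target fails fast instead of burning the whole retry budget.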
00:24:03.531 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.532 { 00:24:03.532 "params": { 00:24:03.532 "name": "Nvme$subsystem", 00:24:03.532 "trtype": "$TEST_TRANSPORT", 00:24:03.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.532 "adrfam": "ipv4", 00:24:03.532 "trsvcid": "$NVMF_PORT", 00:24:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.532 "hdgst": ${hdgst:-false}, 00:24:03.532 "ddgst": ${ddgst:-false} 00:24:03.532 }, 00:24:03.532 "method": "bdev_nvme_attach_controller" 00:24:03.532 } 00:24:03.532 EOF 00:24:03.532 )") 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.532 11:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.532 { 00:24:03.532 "params": { 00:24:03.532 "name": "Nvme$subsystem", 00:24:03.532 "trtype": "$TEST_TRANSPORT", 00:24:03.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.532 "adrfam": "ipv4", 00:24:03.532 "trsvcid": "$NVMF_PORT", 00:24:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.532 "hdgst": ${hdgst:-false}, 00:24:03.532 "ddgst": ${ddgst:-false} 00:24:03.532 }, 00:24:03.532 "method": "bdev_nvme_attach_controller" 00:24:03.532 } 00:24:03.532 EOF 00:24:03.532 )") 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.532 { 00:24:03.532 "params": { 00:24:03.532 "name": "Nvme$subsystem", 00:24:03.532 "trtype": "$TEST_TRANSPORT", 00:24:03.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.532 "adrfam": "ipv4", 00:24:03.532 "trsvcid": "$NVMF_PORT", 00:24:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.532 "hdgst": ${hdgst:-false}, 00:24:03.532 "ddgst": ${ddgst:-false} 00:24:03.532 }, 00:24:03.532 "method": "bdev_nvme_attach_controller" 00:24:03.532 } 00:24:03.532 EOF 00:24:03.532 )") 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.532 { 
00:24:03.532 "params": { 00:24:03.532 "name": "Nvme$subsystem", 00:24:03.532 "trtype": "$TEST_TRANSPORT", 00:24:03.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.532 "adrfam": "ipv4", 00:24:03.532 "trsvcid": "$NVMF_PORT", 00:24:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.532 "hdgst": ${hdgst:-false}, 00:24:03.532 "ddgst": ${ddgst:-false} 00:24:03.532 }, 00:24:03.532 "method": "bdev_nvme_attach_controller" 00:24:03.532 } 00:24:03.532 EOF 00:24:03.532 )") 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.532 { 00:24:03.532 "params": { 00:24:03.532 "name": "Nvme$subsystem", 00:24:03.532 "trtype": "$TEST_TRANSPORT", 00:24:03.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.532 "adrfam": "ipv4", 00:24:03.532 "trsvcid": "$NVMF_PORT", 00:24:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.532 "hdgst": ${hdgst:-false}, 00:24:03.532 "ddgst": ${ddgst:-false} 00:24:03.532 }, 00:24:03.532 "method": "bdev_nvme_attach_controller" 00:24:03.532 } 00:24:03.532 EOF 00:24:03.532 )") 00:24:03.532 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.794 { 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme$subsystem", 00:24:03.794 "trtype": "$TEST_TRANSPORT", 00:24:03.794 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "$NVMF_PORT", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.794 "hdgst": ${hdgst:-false}, 00:24:03.794 "ddgst": ${ddgst:-false} 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 } 00:24:03.794 EOF 00:24:03.794 )") 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.794 [2024-11-19 11:18:11.886343] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:24:03.794 [2024-11-19 11:18:11.886396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.794 { 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme$subsystem", 00:24:03.794 "trtype": "$TEST_TRANSPORT", 00:24:03.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "$NVMF_PORT", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.794 "hdgst": ${hdgst:-false}, 00:24:03.794 "ddgst": ${ddgst:-false} 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 } 00:24:03.794 EOF 00:24:03.794 )") 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.794 { 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme$subsystem", 00:24:03.794 "trtype": "$TEST_TRANSPORT", 00:24:03.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "$NVMF_PORT", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.794 "hdgst": ${hdgst:-false}, 00:24:03.794 "ddgst": ${ddgst:-false} 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 } 00:24:03.794 EOF 00:24:03.794 )") 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.794 { 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme$subsystem", 00:24:03.794 "trtype": "$TEST_TRANSPORT", 00:24:03.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "$NVMF_PORT", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.794 "hdgst": ${hdgst:-false}, 00:24:03.794 "ddgst": ${ddgst:-false} 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 } 00:24:03.794 EOF 00:24:03.794 )") 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:24:03.794 { 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme$subsystem", 00:24:03.794 "trtype": "$TEST_TRANSPORT", 00:24:03.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "$NVMF_PORT", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.794 "hdgst": ${hdgst:-false}, 00:24:03.794 "ddgst": ${ddgst:-false} 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 } 00:24:03.794 EOF 00:24:03.794 )") 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:03.794 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme1", 00:24:03.794 "trtype": "tcp", 00:24:03.794 "traddr": "10.0.0.2", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "4420", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.794 "hdgst": false, 00:24:03.794 "ddgst": false 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 },{ 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme2", 00:24:03.794 "trtype": "tcp", 00:24:03.794 "traddr": "10.0.0.2", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "4420", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:03.794 "hdgst": false, 00:24:03.794 "ddgst": false 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 },{ 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme3", 00:24:03.794 "trtype": "tcp", 00:24:03.794 "traddr": 
"10.0.0.2", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "4420", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:03.794 "hdgst": false, 00:24:03.794 "ddgst": false 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 },{ 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme4", 00:24:03.794 "trtype": "tcp", 00:24:03.794 "traddr": "10.0.0.2", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "4420", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:03.794 "hdgst": false, 00:24:03.794 "ddgst": false 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 },{ 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme5", 00:24:03.794 "trtype": "tcp", 00:24:03.794 "traddr": "10.0.0.2", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "4420", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:03.794 "hdgst": false, 00:24:03.794 "ddgst": false 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 },{ 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme6", 00:24:03.794 "trtype": "tcp", 00:24:03.794 "traddr": "10.0.0.2", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "4420", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:03.794 "hdgst": false, 00:24:03.794 "ddgst": false 00:24:03.794 }, 00:24:03.794 "method": "bdev_nvme_attach_controller" 00:24:03.794 },{ 00:24:03.794 "params": { 00:24:03.794 "name": "Nvme7", 00:24:03.794 "trtype": "tcp", 00:24:03.794 "traddr": "10.0.0.2", 00:24:03.794 "adrfam": "ipv4", 00:24:03.794 "trsvcid": "4420", 00:24:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:03.794 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:03.795 "hdgst": false, 00:24:03.795 "ddgst": false 00:24:03.795 }, 00:24:03.795 
"method": "bdev_nvme_attach_controller" 00:24:03.795 },{ 00:24:03.795 "params": { 00:24:03.795 "name": "Nvme8", 00:24:03.795 "trtype": "tcp", 00:24:03.795 "traddr": "10.0.0.2", 00:24:03.795 "adrfam": "ipv4", 00:24:03.795 "trsvcid": "4420", 00:24:03.795 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:03.795 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:03.795 "hdgst": false, 00:24:03.795 "ddgst": false 00:24:03.795 }, 00:24:03.795 "method": "bdev_nvme_attach_controller" 00:24:03.795 },{ 00:24:03.795 "params": { 00:24:03.795 "name": "Nvme9", 00:24:03.795 "trtype": "tcp", 00:24:03.795 "traddr": "10.0.0.2", 00:24:03.795 "adrfam": "ipv4", 00:24:03.795 "trsvcid": "4420", 00:24:03.795 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:03.795 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:03.795 "hdgst": false, 00:24:03.795 "ddgst": false 00:24:03.795 }, 00:24:03.795 "method": "bdev_nvme_attach_controller" 00:24:03.795 },{ 00:24:03.795 "params": { 00:24:03.795 "name": "Nvme10", 00:24:03.795 "trtype": "tcp", 00:24:03.795 "traddr": "10.0.0.2", 00:24:03.795 "adrfam": "ipv4", 00:24:03.795 "trsvcid": "4420", 00:24:03.795 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:03.795 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:03.795 "hdgst": false, 00:24:03.795 "ddgst": false 00:24:03.795 }, 00:24:03.795 "method": "bdev_nvme_attach_controller" 00:24:03.795 }' 00:24:03.795 [2024-11-19 11:18:11.966106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.795 [2024-11-19 11:18:12.002768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 10801 00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:05.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 10801 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:05.182 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 10376 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.127 { 00:24:06.127 "params": { 00:24:06.127 "name": "Nvme$subsystem", 00:24:06.127 "trtype": "$TEST_TRANSPORT", 00:24:06.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.127 "adrfam": "ipv4", 00:24:06.127 "trsvcid": "$NVMF_PORT", 00:24:06.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.127 "hdgst": ${hdgst:-false}, 00:24:06.127 "ddgst": ${ddgst:-false} 00:24:06.127 }, 00:24:06.127 "method": "bdev_nvme_attach_controller" 00:24:06.127 } 00:24:06.127 EOF 00:24:06.127 )") 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.127 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.127 { 00:24:06.127 "params": { 00:24:06.128 "name": "Nvme$subsystem", 00:24:06.128 "trtype": "$TEST_TRANSPORT", 00:24:06.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.128 "adrfam": "ipv4", 00:24:06.128 "trsvcid": "$NVMF_PORT", 00:24:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.128 "hdgst": ${hdgst:-false}, 00:24:06.128 "ddgst": ${ddgst:-false} 00:24:06.128 }, 00:24:06.128 "method": "bdev_nvme_attach_controller" 00:24:06.128 } 00:24:06.128 EOF 00:24:06.128 )") 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.128 { 00:24:06.128 "params": { 00:24:06.128 "name": "Nvme$subsystem", 
00:24:06.128 "trtype": "$TEST_TRANSPORT", 00:24:06.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.128 "adrfam": "ipv4", 00:24:06.128 "trsvcid": "$NVMF_PORT", 00:24:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.128 "hdgst": ${hdgst:-false}, 00:24:06.128 "ddgst": ${ddgst:-false} 00:24:06.128 }, 00:24:06.128 "method": "bdev_nvme_attach_controller" 00:24:06.128 } 00:24:06.128 EOF 00:24:06.128 )") 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.128 { 00:24:06.128 "params": { 00:24:06.128 "name": "Nvme$subsystem", 00:24:06.128 "trtype": "$TEST_TRANSPORT", 00:24:06.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.128 "adrfam": "ipv4", 00:24:06.128 "trsvcid": "$NVMF_PORT", 00:24:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.128 "hdgst": ${hdgst:-false}, 00:24:06.128 "ddgst": ${ddgst:-false} 00:24:06.128 }, 00:24:06.128 "method": "bdev_nvme_attach_controller" 00:24:06.128 } 00:24:06.128 EOF 00:24:06.128 )") 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.128 { 00:24:06.128 "params": { 00:24:06.128 "name": "Nvme$subsystem", 00:24:06.128 "trtype": "$TEST_TRANSPORT", 00:24:06.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.128 "adrfam": "ipv4", 
00:24:06.128 "trsvcid": "$NVMF_PORT", 00:24:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.128 "hdgst": ${hdgst:-false}, 00:24:06.128 "ddgst": ${ddgst:-false} 00:24:06.128 }, 00:24:06.128 "method": "bdev_nvme_attach_controller" 00:24:06.128 } 00:24:06.128 EOF 00:24:06.128 )") 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.128 { 00:24:06.128 "params": { 00:24:06.128 "name": "Nvme$subsystem", 00:24:06.128 "trtype": "$TEST_TRANSPORT", 00:24:06.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.128 "adrfam": "ipv4", 00:24:06.128 "trsvcid": "$NVMF_PORT", 00:24:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.128 "hdgst": ${hdgst:-false}, 00:24:06.128 "ddgst": ${ddgst:-false} 00:24:06.128 }, 00:24:06.128 "method": "bdev_nvme_attach_controller" 00:24:06.128 } 00:24:06.128 EOF 00:24:06.128 )") 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.128 { 00:24:06.128 "params": { 00:24:06.128 "name": "Nvme$subsystem", 00:24:06.128 "trtype": "$TEST_TRANSPORT", 00:24:06.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.128 "adrfam": "ipv4", 00:24:06.128 "trsvcid": "$NVMF_PORT", 00:24:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.128 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:06.128 "hdgst": ${hdgst:-false}, 00:24:06.128 "ddgst": ${ddgst:-false} 00:24:06.128 }, 00:24:06.128 "method": "bdev_nvme_attach_controller" 00:24:06.128 } 00:24:06.128 EOF 00:24:06.128 )") 00:24:06.128 [2024-11-19 11:18:14.461816] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:24:06.128 [2024-11-19 11:18:14.461875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11315 ] 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.128 { 00:24:06.128 "params": { 00:24:06.128 "name": "Nvme$subsystem", 00:24:06.128 "trtype": "$TEST_TRANSPORT", 00:24:06.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.128 "adrfam": "ipv4", 00:24:06.128 "trsvcid": "$NVMF_PORT", 00:24:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.128 "hdgst": ${hdgst:-false}, 00:24:06.128 "ddgst": ${ddgst:-false} 00:24:06.128 }, 00:24:06.128 "method": "bdev_nvme_attach_controller" 00:24:06.128 } 00:24:06.128 EOF 00:24:06.128 )") 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.128 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.128 { 00:24:06.128 "params": 
{ 00:24:06.128 "name": "Nvme$subsystem", 00:24:06.128 "trtype": "$TEST_TRANSPORT", 00:24:06.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.128 "adrfam": "ipv4", 00:24:06.128 "trsvcid": "$NVMF_PORT", 00:24:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.128 "hdgst": ${hdgst:-false}, 00:24:06.128 "ddgst": ${ddgst:-false} 00:24:06.128 }, 00:24:06.128 "method": "bdev_nvme_attach_controller" 00:24:06.128 } 00:24:06.128 EOF 00:24:06.128 )") 00:24:06.390 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.390 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.390 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.390 { 00:24:06.390 "params": { 00:24:06.390 "name": "Nvme$subsystem", 00:24:06.390 "trtype": "$TEST_TRANSPORT", 00:24:06.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.390 "adrfam": "ipv4", 00:24:06.390 "trsvcid": "$NVMF_PORT", 00:24:06.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.390 "hdgst": ${hdgst:-false}, 00:24:06.390 "ddgst": ${ddgst:-false} 00:24:06.390 }, 00:24:06.390 "method": "bdev_nvme_attach_controller" 00:24:06.390 } 00:24:06.390 EOF 00:24:06.390 )") 00:24:06.390 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:06.390 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:06.390 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:06.390 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:06.390 "params": { 00:24:06.390 "name": "Nvme1", 00:24:06.390 "trtype": "tcp", 00:24:06.390 "traddr": "10.0.0.2", 00:24:06.390 "adrfam": "ipv4", 00:24:06.390 "trsvcid": "4420", 00:24:06.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.390 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 00:24:06.391 "name": "Nvme2", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 00:24:06.391 "name": "Nvme3", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 00:24:06.391 "name": "Nvme4", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 
00:24:06.391 "name": "Nvme5", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 00:24:06.391 "name": "Nvme6", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 00:24:06.391 "name": "Nvme7", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 00:24:06.391 "name": "Nvme8", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 00:24:06.391 "name": "Nvme9", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 },{ 00:24:06.391 "params": { 00:24:06.391 "name": "Nvme10", 00:24:06.391 "trtype": "tcp", 00:24:06.391 "traddr": "10.0.0.2", 00:24:06.391 "adrfam": "ipv4", 00:24:06.391 "trsvcid": "4420", 00:24:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:06.391 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:06.391 "hdgst": false, 00:24:06.391 "ddgst": false 00:24:06.391 }, 00:24:06.391 "method": "bdev_nvme_attach_controller" 00:24:06.391 }' 00:24:06.391 [2024-11-19 11:18:14.540978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.391 [2024-11-19 11:18:14.576968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.778 Running I/O for 1 seconds... 00:24:09.161 1860.00 IOPS, 116.25 MiB/s 00:24:09.161 Latency(us) 00:24:09.161 [2024-11-19T10:18:17.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.161 Verification LBA range: start 0x0 length 0x400 00:24:09.161 Nvme1n1 : 1.15 222.40 13.90 0.00 0.00 284902.19 20643.84 246415.36 00:24:09.161 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme2n1 : 1.15 222.99 13.94 0.00 0.00 278072.96 20206.93 246415.36 00:24:09.162 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme3n1 : 1.10 233.00 14.56 0.00 0.00 262147.84 18131.63 251658.24 00:24:09.162 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme4n1 : 1.07 238.22 14.89 0.00 0.00 251223.68 18459.31 256901.12 00:24:09.162 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme5n1 : 1.13 225.66 14.10 0.00 0.00 261383.47 18131.63 248162.99 00:24:09.162 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme6n1 : 1.14 224.09 14.01 0.00 0.00 258347.31 15182.51 246415.36 00:24:09.162 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme7n1 : 1.18 270.11 16.88 0.00 0.00 211288.92 9939.63 248162.99 00:24:09.162 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme8n1 : 1.15 277.20 17.33 0.00 0.00 201407.66 14745.60 246415.36 00:24:09.162 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme9n1 : 1.16 220.71 13.79 0.00 0.00 248377.39 19879.25 283115.52 00:24:09.162 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.162 Verification LBA range: start 0x0 length 0x400 00:24:09.162 Nvme10n1 : 1.20 323.23 20.20 0.00 0.00 167218.07 6171.31 242920.11 00:24:09.162 [2024-11-19T10:18:17.514Z] =================================================================================================================== 00:24:09.162 [2024-11-19T10:18:17.514Z] Total : 2457.63 153.60 0.00 0.00 237278.00 6171.31 283115.52 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.162 rmmod nvme_tcp 00:24:09.162 rmmod nvme_fabrics 00:24:09.162 rmmod nvme_keyring 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 10376 ']' 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 10376 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 10376 ']' 00:24:09.162 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 10376 00:24:09.423 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:09.423 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.423 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 10376 00:24:09.423 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.423 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.423 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 10376' 00:24:09.423 killing process with pid 10376 00:24:09.423 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 10376 00:24:09.423 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 10376 00:24:09.684 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.684 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.684 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.684 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:09.684 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:09.684 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.684 11:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.684 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.684 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.685 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.685 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.685 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.598 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:11.598 00:24:11.598 real 0m18.152s 00:24:11.598 user 0m35.335s 00:24:11.598 sys 0m7.654s 00:24:11.598 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.598 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:11.598 ************************************ 00:24:11.598 END TEST nvmf_shutdown_tc1 00:24:11.598 ************************************ 00:24:11.598 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:11.598 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:11.598 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.598 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:11.860 ************************************ 00:24:11.860 
START TEST nvmf_shutdown_tc2 00:24:11.860 ************************************ 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.860 11:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:11.860 11:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:11.860 11:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:11.860 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:11.860 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:11.861 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:11.861 11:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.861 11:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:11.861 Found net devices under 0000:31:00.0: cvl_0_0 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.861 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:11.861 Found net devices under 0000:31:00.1: cvl_0_1 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:11.861 11:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:11.861 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.122 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.122 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:12.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:24:12.123 00:24:12.123 --- 10.0.0.2 ping statistics --- 00:24:12.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.123 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:24:12.123 00:24:12.123 --- 10.0.0.1 ping statistics --- 00:24:12.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.123 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.123 11:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=12604 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 12604 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 12604 ']' 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
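The `waitforlisten 12604` step in the trace blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A much-simplified sketch of that polling pattern follows; it only waits for a path to appear, whereas SPDK's actual `waitforlisten` also probes the RPC endpoint, so treat this as an illustration:

```shell
# wait_for_path: poll until a filesystem path (e.g. a UNIX-domain socket)
# exists, or give up after a retry budget. Simplified illustration only,
# not SPDK's waitforlisten.
wait_for_path() {
    local path=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

# Usage sketch (paths and flags are stand-ins):
#   ./build/bin/nvmf_tgt -i 0 -m 0x1E &
#   nvmfpid=$!
#   wait_for_path /var/tmp/spdk.sock || echo "target never came up"
```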
00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.123 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.123 [2024-11-19 11:18:20.398659] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:24:12.123 [2024-11-19 11:18:20.398717] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.384 [2024-11-19 11:18:20.500950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.384 [2024-11-19 11:18:20.539319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.384 [2024-11-19 11:18:20.539359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.384 [2024-11-19 11:18:20.539366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.384 [2024-11-19 11:18:20.539371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.385 [2024-11-19 11:18:20.539375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:12.385 [2024-11-19 11:18:20.540833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.385 [2024-11-19 11:18:20.540994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.385 [2024-11-19 11:18:20.541128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.385 [2024-11-19 11:18:20.541129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.958 [2024-11-19 11:18:21.249197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.958 11:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.958 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:12.959 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.220 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:13.220 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:13.220 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.220 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.220 Malloc1 00:24:13.220 [2024-11-19 11:18:21.358793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.220 Malloc2 00:24:13.220 Malloc3 00:24:13.220 Malloc4 00:24:13.220 Malloc5 00:24:13.220 Malloc6 00:24:13.220 Malloc7 00:24:13.481 Malloc8 00:24:13.481 Malloc9 
00:24:13.481 Malloc10 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=12986 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 12986 /var/tmp/bdevperf.sock 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 12986 ']' 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:13.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
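The `gen_nvmf_target_json 1 2 3 ...` call whose expansion follows builds one JSON fragment per subsystem by appending heredoc-generated strings to a bash array (the `config+=("$(cat <<-EOF ... EOF)")` pattern visible in the trace). A stripped-down standalone sketch of the same pattern, with placeholder values standing in for the test environment's:

```shell
# Build per-subsystem connection params the way the trace's heredoc loop
# does. Values below are placeholders for this illustration.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
    # Each iteration captures one expanded heredoc as an array element.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

printf '%s\n' "${config[@]}"
```

The real helper joins these fragments into the `--json /dev/fd/63` document handed to bdevperf.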
00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.481 { 00:24:13.481 "params": { 00:24:13.481 "name": "Nvme$subsystem", 00:24:13.481 "trtype": "$TEST_TRANSPORT", 00:24:13.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.481 "adrfam": "ipv4", 00:24:13.481 "trsvcid": "$NVMF_PORT", 00:24:13.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.481 "hdgst": ${hdgst:-false}, 00:24:13.481 "ddgst": ${ddgst:-false} 00:24:13.481 }, 00:24:13.481 "method": "bdev_nvme_attach_controller" 00:24:13.481 } 00:24:13.481 EOF 00:24:13.481 )") 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.481 { 00:24:13.481 "params": { 00:24:13.481 "name": "Nvme$subsystem", 00:24:13.481 "trtype": "$TEST_TRANSPORT", 00:24:13.481 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.481 "adrfam": "ipv4", 00:24:13.481 "trsvcid": "$NVMF_PORT", 00:24:13.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.481 "hdgst": ${hdgst:-false}, 00:24:13.481 "ddgst": ${ddgst:-false} 00:24:13.481 }, 00:24:13.481 "method": "bdev_nvme_attach_controller" 00:24:13.481 } 00:24:13.481 EOF 00:24:13.481 )") 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.481 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.482 { 00:24:13.482 "params": { 00:24:13.482 "name": "Nvme$subsystem", 00:24:13.482 "trtype": "$TEST_TRANSPORT", 00:24:13.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.482 "adrfam": "ipv4", 00:24:13.482 "trsvcid": "$NVMF_PORT", 00:24:13.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.482 "hdgst": ${hdgst:-false}, 00:24:13.482 "ddgst": ${ddgst:-false} 00:24:13.482 }, 00:24:13.482 "method": "bdev_nvme_attach_controller" 00:24:13.482 } 00:24:13.482 EOF 00:24:13.482 )") 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.482 { 00:24:13.482 "params": { 00:24:13.482 "name": "Nvme$subsystem", 00:24:13.482 "trtype": "$TEST_TRANSPORT", 00:24:13.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.482 "adrfam": "ipv4", 00:24:13.482 "trsvcid": "$NVMF_PORT", 00:24:13.482 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.482 "hdgst": ${hdgst:-false}, 00:24:13.482 "ddgst": ${ddgst:-false} 00:24:13.482 }, 00:24:13.482 "method": "bdev_nvme_attach_controller" 00:24:13.482 } 00:24:13.482 EOF 00:24:13.482 )") 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.482 { 00:24:13.482 "params": { 00:24:13.482 "name": "Nvme$subsystem", 00:24:13.482 "trtype": "$TEST_TRANSPORT", 00:24:13.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.482 "adrfam": "ipv4", 00:24:13.482 "trsvcid": "$NVMF_PORT", 00:24:13.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.482 "hdgst": ${hdgst:-false}, 00:24:13.482 "ddgst": ${ddgst:-false} 00:24:13.482 }, 00:24:13.482 "method": "bdev_nvme_attach_controller" 00:24:13.482 } 00:24:13.482 EOF 00:24:13.482 )") 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.482 { 00:24:13.482 "params": { 00:24:13.482 "name": "Nvme$subsystem", 00:24:13.482 "trtype": "$TEST_TRANSPORT", 00:24:13.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.482 "adrfam": "ipv4", 00:24:13.482 "trsvcid": "$NVMF_PORT", 00:24:13.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.482 "hdgst": 
${hdgst:-false}, 00:24:13.482 "ddgst": ${ddgst:-false} 00:24:13.482 }, 00:24:13.482 "method": "bdev_nvme_attach_controller" 00:24:13.482 } 00:24:13.482 EOF 00:24:13.482 )") 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.482 [2024-11-19 11:18:21.804257] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:24:13.482 [2024-11-19 11:18:21.804314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12986 ] 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.482 { 00:24:13.482 "params": { 00:24:13.482 "name": "Nvme$subsystem", 00:24:13.482 "trtype": "$TEST_TRANSPORT", 00:24:13.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.482 "adrfam": "ipv4", 00:24:13.482 "trsvcid": "$NVMF_PORT", 00:24:13.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.482 "hdgst": ${hdgst:-false}, 00:24:13.482 "ddgst": ${ddgst:-false} 00:24:13.482 }, 00:24:13.482 "method": "bdev_nvme_attach_controller" 00:24:13.482 } 00:24:13.482 EOF 00:24:13.482 )") 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.482 { 00:24:13.482 "params": { 00:24:13.482 "name": "Nvme$subsystem", 00:24:13.482 
"trtype": "$TEST_TRANSPORT", 00:24:13.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.482 "adrfam": "ipv4", 00:24:13.482 "trsvcid": "$NVMF_PORT", 00:24:13.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.482 "hdgst": ${hdgst:-false}, 00:24:13.482 "ddgst": ${ddgst:-false} 00:24:13.482 }, 00:24:13.482 "method": "bdev_nvme_attach_controller" 00:24:13.482 } 00:24:13.482 EOF 00:24:13.482 )") 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.482 { 00:24:13.482 "params": { 00:24:13.482 "name": "Nvme$subsystem", 00:24:13.482 "trtype": "$TEST_TRANSPORT", 00:24:13.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.482 "adrfam": "ipv4", 00:24:13.482 "trsvcid": "$NVMF_PORT", 00:24:13.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.482 "hdgst": ${hdgst:-false}, 00:24:13.482 "ddgst": ${ddgst:-false} 00:24:13.482 }, 00:24:13.482 "method": "bdev_nvme_attach_controller" 00:24:13.482 } 00:24:13.482 EOF 00:24:13.482 )") 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.482 { 00:24:13.482 "params": { 00:24:13.482 "name": "Nvme$subsystem", 00:24:13.482 "trtype": "$TEST_TRANSPORT", 00:24:13.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.482 "adrfam": "ipv4", 00:24:13.482 
"trsvcid": "$NVMF_PORT", 00:24:13.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.482 "hdgst": ${hdgst:-false}, 00:24:13.482 "ddgst": ${ddgst:-false} 00:24:13.482 }, 00:24:13.482 "method": "bdev_nvme_attach_controller" 00:24:13.482 } 00:24:13.482 EOF 00:24:13.482 )") 00:24:13.482 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:13.743 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:24:13.743 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:13.743 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme1", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.743 "hdgst": false, 00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme2", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:13.743 "hdgst": false, 00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme3", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:13.743 "hdgst": false, 
00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme4", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:13.743 "hdgst": false, 00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme5", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:13.743 "hdgst": false, 00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme6", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:13.743 "hdgst": false, 00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme7", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:13.743 "hdgst": false, 00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme8", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 
00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:13.743 "hdgst": false, 00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme9", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:13.743 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:13.743 "hdgst": false, 00:24:13.743 "ddgst": false 00:24:13.743 }, 00:24:13.743 "method": "bdev_nvme_attach_controller" 00:24:13.743 },{ 00:24:13.743 "params": { 00:24:13.743 "name": "Nvme10", 00:24:13.743 "trtype": "tcp", 00:24:13.743 "traddr": "10.0.0.2", 00:24:13.743 "adrfam": "ipv4", 00:24:13.743 "trsvcid": "4420", 00:24:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:13.744 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:13.744 "hdgst": false, 00:24:13.744 "ddgst": false 00:24:13.744 }, 00:24:13.744 "method": "bdev_nvme_attach_controller" 00:24:13.744 }' 00:24:13.744 [2024-11-19 11:18:21.883470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.744 [2024-11-19 11:18:21.919692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.131 Running I/O for 10 seconds... 
00:24:15.131 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.131 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:15.131 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:15.131 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.131 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:15.393 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:15.655 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 12986 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 12986 ']' 
00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 12986 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.916 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 12986 00:24:16.178 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.178 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.178 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 12986' 00:24:16.178 killing process with pid 12986 00:24:16.178 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 12986 00:24:16.178 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 12986 00:24:16.178 Received shutdown signal, test time was about 0.986348 seconds 00:24:16.178 00:24:16.178 Latency(us) 00:24:16.178 [2024-11-19T10:18:24.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme1n1 : 0.99 252.67 15.79 0.00 0.00 248562.39 5051.73 244667.73 00:24:16.178 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme2n1 : 0.96 200.44 12.53 0.00 0.00 309239.75 16820.91 253405.87 00:24:16.178 Job: Nvme3n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme3n1 : 0.98 262.48 16.40 0.00 0.00 231444.05 15510.19 242920.11 00:24:16.178 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme4n1 : 0.96 266.06 16.63 0.00 0.00 223547.84 11960.32 253405.87 00:24:16.178 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme5n1 : 0.98 261.87 16.37 0.00 0.00 222651.73 21845.33 225443.84 00:24:16.178 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme6n1 : 0.95 202.26 12.64 0.00 0.00 281184.14 16384.00 241172.48 00:24:16.178 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme7n1 : 0.98 260.59 16.29 0.00 0.00 213749.12 19770.03 248162.99 00:24:16.178 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme8n1 : 0.98 265.44 16.59 0.00 0.00 204332.22 6171.31 248162.99 00:24:16.178 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme9n1 : 0.97 263.86 16.49 0.00 0.00 201849.17 40413.87 225443.84 00:24:16.178 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.178 Verification LBA range: start 0x0 length 0x400 00:24:16.178 Nvme10n1 : 0.97 198.66 12.42 0.00 0.00 261775.93 21626.88 274377.39 00:24:16.178 [2024-11-19T10:18:24.530Z] =================================================================================================================== 00:24:16.178 [2024-11-19T10:18:24.530Z] Total : 
2434.34 152.15 0.00 0.00 236156.75 5051.73 274377.39 00:24:16.178 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 12604 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.565 rmmod nvme_tcp 00:24:17.565 rmmod nvme_fabrics 00:24:17.565 rmmod nvme_keyring 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 12604 ']' 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 12604 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 12604 ']' 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 12604 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 12604 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 12604' 00:24:17.565 killing process with pid 12604 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 12604 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 12604 00:24:17.565 11:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.565 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.566 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.566 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.566 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.115 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.115 00:24:20.115 real 0m7.961s 00:24:20.115 user 0m24.088s 00:24:20.115 sys 0m1.358s 00:24:20.115 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.116 11:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.116 ************************************ 00:24:20.116 END TEST nvmf_shutdown_tc2 00:24:20.116 ************************************ 00:24:20.116 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:20.116 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:20.116 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.116 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:20.116 ************************************ 00:24:20.116 START TEST nvmf_shutdown_tc3 00:24:20.116 ************************************ 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.116 11:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:20.116 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.116 11:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:20.116 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:20.116 Found net devices under 0000:31:00.0: cvl_0_0 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.116 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:20.117 Found net devices under 0000:31:00.1: cvl_0_1 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.117 11:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:24:20.117 00:24:20.117 --- 10.0.0.2 ping statistics --- 00:24:20.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.117 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:24:20.117 00:24:20.117 --- 10.0.0.1 ping statistics --- 00:24:20.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.117 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=14311 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 14311 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 14311 ']' 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.117 11:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.117 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.378 [2024-11-19 11:18:28.495729] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:24:20.378 [2024-11-19 11:18:28.495799] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.378 [2024-11-19 11:18:28.599939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.378 [2024-11-19 11:18:28.634018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.378 [2024-11-19 11:18:28.634052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.378 [2024-11-19 11:18:28.634058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.378 [2024-11-19 11:18:28.634062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.378 [2024-11-19 11:18:28.634067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:20.378 [2024-11-19 11:18:28.635372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.378 [2024-11-19 11:18:28.635529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.378 [2024-11-19 11:18:28.635686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.378 [2024-11-19 11:18:28.635687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:20.950 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.950 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:20.950 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.950 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.950 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:21.212 [2024-11-19 11:18:29.346037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.212 11:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.212 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:21.212 Malloc1 00:24:21.212 [2024-11-19 11:18:29.452605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.212 Malloc2 00:24:21.212 Malloc3 00:24:21.212 Malloc4 00:24:21.474 Malloc5 00:24:21.474 Malloc6 00:24:21.474 Malloc7 00:24:21.474 Malloc8 00:24:21.474 Malloc9 
00:24:21.474 Malloc10 00:24:21.474 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.474 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:21.474 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.474 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=14546 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 14546 /var/tmp/bdevperf.sock 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 14546 ']' 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.737 { 00:24:21.737 "params": { 00:24:21.737 "name": "Nvme$subsystem", 00:24:21.737 "trtype": "$TEST_TRANSPORT", 00:24:21.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "$NVMF_PORT", 00:24:21.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.737 "hdgst": ${hdgst:-false}, 00:24:21.737 "ddgst": ${ddgst:-false} 00:24:21.737 }, 00:24:21.737 "method": "bdev_nvme_attach_controller" 00:24:21.737 } 00:24:21.737 EOF 00:24:21.737 )") 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.737 { 00:24:21.737 "params": { 00:24:21.737 "name": "Nvme$subsystem", 00:24:21.737 "trtype": "$TEST_TRANSPORT", 00:24:21.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "$NVMF_PORT", 00:24:21.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.737 "hdgst": ${hdgst:-false}, 00:24:21.737 "ddgst": ${ddgst:-false} 00:24:21.737 }, 00:24:21.737 "method": "bdev_nvme_attach_controller" 00:24:21.737 } 00:24:21.737 EOF 00:24:21.737 )") 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.737 { 00:24:21.737 "params": { 00:24:21.737 "name": "Nvme$subsystem", 00:24:21.737 "trtype": "$TEST_TRANSPORT", 00:24:21.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "$NVMF_PORT", 00:24:21.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.737 "hdgst": ${hdgst:-false}, 00:24:21.737 "ddgst": ${ddgst:-false} 00:24:21.737 }, 00:24:21.737 "method": "bdev_nvme_attach_controller" 00:24:21.737 } 00:24:21.737 EOF 00:24:21.737 )") 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:24:21.737 { 00:24:21.737 "params": { 00:24:21.737 "name": "Nvme$subsystem", 00:24:21.737 "trtype": "$TEST_TRANSPORT", 00:24:21.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "$NVMF_PORT", 00:24:21.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.737 "hdgst": ${hdgst:-false}, 00:24:21.737 "ddgst": ${ddgst:-false} 00:24:21.737 }, 00:24:21.737 "method": "bdev_nvme_attach_controller" 00:24:21.737 } 00:24:21.737 EOF 00:24:21.737 )") 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.737 { 00:24:21.737 "params": { 00:24:21.737 "name": "Nvme$subsystem", 00:24:21.737 "trtype": "$TEST_TRANSPORT", 00:24:21.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "$NVMF_PORT", 00:24:21.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.737 "hdgst": ${hdgst:-false}, 00:24:21.737 "ddgst": ${ddgst:-false} 00:24:21.737 }, 00:24:21.737 "method": "bdev_nvme_attach_controller" 00:24:21.737 } 00:24:21.737 EOF 00:24:21.737 )") 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.737 { 00:24:21.737 "params": { 00:24:21.737 "name": "Nvme$subsystem", 00:24:21.737 "trtype": "$TEST_TRANSPORT", 
00:24:21.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "$NVMF_PORT", 00:24:21.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.737 "hdgst": ${hdgst:-false}, 00:24:21.737 "ddgst": ${ddgst:-false} 00:24:21.737 }, 00:24:21.737 "method": "bdev_nvme_attach_controller" 00:24:21.737 } 00:24:21.737 EOF 00:24:21.737 )") 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.737 [2024-11-19 11:18:29.900855] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:24:21.737 [2024-11-19 11:18:29.900916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14546 ] 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.737 { 00:24:21.737 "params": { 00:24:21.737 "name": "Nvme$subsystem", 00:24:21.737 "trtype": "$TEST_TRANSPORT", 00:24:21.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "$NVMF_PORT", 00:24:21.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.737 "hdgst": ${hdgst:-false}, 00:24:21.737 "ddgst": ${ddgst:-false} 00:24:21.737 }, 00:24:21.737 "method": "bdev_nvme_attach_controller" 00:24:21.737 } 00:24:21.737 EOF 00:24:21.737 )") 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.737 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.737 { 00:24:21.737 "params": { 00:24:21.737 "name": "Nvme$subsystem", 00:24:21.737 "trtype": "$TEST_TRANSPORT", 00:24:21.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "$NVMF_PORT", 00:24:21.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.738 "hdgst": ${hdgst:-false}, 00:24:21.738 "ddgst": ${ddgst:-false} 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 } 00:24:21.738 EOF 00:24:21.738 )") 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.738 { 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme$subsystem", 00:24:21.738 "trtype": "$TEST_TRANSPORT", 00:24:21.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "$NVMF_PORT", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.738 "hdgst": ${hdgst:-false}, 00:24:21.738 "ddgst": ${ddgst:-false} 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 } 00:24:21.738 EOF 00:24:21.738 )") 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.738 11:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.738 { 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme$subsystem", 00:24:21.738 "trtype": "$TEST_TRANSPORT", 00:24:21.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "$NVMF_PORT", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.738 "hdgst": ${hdgst:-false}, 00:24:21.738 "ddgst": ${ddgst:-false} 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 } 00:24:21.738 EOF 00:24:21.738 )") 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:24:21.738 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme1", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme2", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 
00:24:21.738 "params": { 00:24:21.738 "name": "Nvme3", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme4", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme5", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme6", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme7", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:21.738 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme8", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme9", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 },{ 00:24:21.738 "params": { 00:24:21.738 "name": "Nvme10", 00:24:21.738 "trtype": "tcp", 00:24:21.738 "traddr": "10.0.0.2", 00:24:21.738 "adrfam": "ipv4", 00:24:21.738 "trsvcid": "4420", 00:24:21.738 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:21.738 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:21.738 "hdgst": false, 00:24:21.738 "ddgst": false 00:24:21.738 }, 00:24:21.738 "method": "bdev_nvme_attach_controller" 00:24:21.738 }' 00:24:21.738 [2024-11-19 11:18:29.979538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.738 [2024-11-19 11:18:30.016652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.125 Running I/O for 10 seconds... 
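The xtrace above shows nvmf/common.sh assembling the bdevperf JSON config: a loop appends one heredoc fragment per subsystem to a `config` array, then joins the fragments with `IFS=,` and prints the result (the trace pipes it through `jq .`). A minimal standalone sketch of that pattern — the function name and the fixed address/port values are illustrative, not the exact SPDK helper:

```shell
gen_attach_config() {
    local config=()
    local subsystem
    for subsystem in "$@"; do
        # One JSON fragment per subsystem, matching the shape seen in the trace
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the fragments with ',' the way the trace does with IFS=,
    # before the result is handed to 'jq .' / printf for validation.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_attach_config 1 2
```

The joined output has the same shape as the `printf '%s\n'` block in the trace: one `{ "params": …, "method": "bdev_nvme_attach_controller" }` object per subsystem, separated by commas.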
00:24:23.125 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.125 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:23.125 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:23.125 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.125 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:23.386 11:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:23.386 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=74 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 74 -ge 100 ']' 00:24:23.648 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:23.909 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:23.909 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:23.909 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.909 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.909 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.909 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=138 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 138 -ge 100 ']' 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:24.204 11:18:32 
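The waitforio loop above polls `bdev_get_iostat` over the bdevperf RPC socket until Nvme1n1 reports at least 100 read ops (3 → 74 → 138 in this run), sleeping 0.25 s between up to 10 attempts. A minimal sketch of that retry pattern; the helper name `wait_for_count` and the probe indirection are illustrative — the real script's probe is `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`:

```shell
# Retry a probe command up to $1 times, sleeping 0.25 s between attempts,
# and succeed as soon as it reports a count >= $2 (mirrors the i/read_io_count
# loop in target/shutdown.sh; names here are illustrative).
wait_for_count() {
    local i=$1 threshold=$2 probe=$3
    local count ret=1
    while ((i != 0)); do
        count=$("$probe")
        if [ "$count" -ge "$threshold" ]; then
            ret=0
            break
        fi
        sleep 0.25
        ((i--))
    done
    return $ret
}
```

In the trace the threshold is 100 and the loop succeeds on the third sample (138), after which the test proceeds to killprocess.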
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 14311 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 14311 ']' 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 14311 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 14311 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 14311' 00:24:24.204 killing process with pid 14311 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 14311 00:24:24.204 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 14311 00:24:24.204 [2024-11-19 11:18:32.371468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168bc0 is same with the state(6) to be set 00:24:24.204 [2024-11-19 11:18:32.373519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149860 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149860 is same with the state(6) to be set 00:24:24.206 [2024-11-19 11:18:32.373844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149860 is same with the state(6) to be set 00:24:24.206 [2024-11-19 11:18:32.373848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149860 is same with the state(6) to be set 00:24:24.206 [2024-11-19 11:18:32.375078] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.206 [2024-11-19 11:18:32.375140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169090 is same with the state(6) to be set 00:24:24.206 [2024-11-19 11:18:32.375727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169560 is same with the state(6) to be set 00:24:24.206 [2024-11-19 11:18:32.375790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375882] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375975] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.375985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.375992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 
[2024-11-19 11:18:32.376172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.206 [2024-11-19 11:18:32.376256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.206 [2024-11-19 11:18:32.376263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with
the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376482]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.207 [2024-11-19 11:18:32.376523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.207 [2024-11-19 11:18:32.376534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.207 [2024-11-19 11:18:32.376539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.207 [2024-11-19 11:18:32.376546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with 
the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169a50 is same with the state(6) to be set 00:24:24.208 [2024-11-19 11:18:32.376711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 [2024-11-19 11:18:32.376851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.208 [2024-11-19 11:18:32.376865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.208 
[2024-11-19 11:18:32.376872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-11-19 11:18:32.376882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-11-19 11:18:32.376889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-11-19 11:18:32.376899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-11-19 11:18:32.376906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-11-19 11:18:32.376920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-11-19 11:18:32.376928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-11-19 11:18:32.376937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-11-19 11:18:32.376945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-11-19 11:18:32.376954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-11-19 11:18:32.376961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-11-19 11:18:32.377292] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169f20 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169f20 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169f20 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.377999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378009] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378069] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378125] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378182] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.209 [2024-11-19 11:18:32.378209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378238] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.378261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a3f0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379062] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379121] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379176] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379235] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379291] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef92d0 is same with the state(6) to be set 00:24:24.210 [2024-11-19 11:18:32.379980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.379994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380005] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380061] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 00:24:24.211 [2024-11-19 11:18:32.380120] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef97a0 is same with the state(6) to be set 
[... same message repeated, timestamps 11:18:32.380124 through 11:18:32.380283 ...] 
00:24:24.211 [2024-11-19 11:18:32.380751] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef9c70 is same with the state(6) to be set 
[... same message repeated, timestamps 11:18:32.380764 through 11:18:32.395047 ...] 
00:24:24.212 [2024-11-19 11:18:32.395051] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef9c70 is same with the state(6) to be set 
[... same message repeated at 11:18:32.395055 and 11:18:32.395060 ...] 
00:24:24.212 [2024-11-19 11:18:32.397699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:24.212 [2024-11-19 11:18:32.397721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3 ...] 
00:24:24.212 [2024-11-19 11:18:32.397777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12895a0 is same with the state(6) to be set 
[... the same four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs followed by an nvme_tcp.c:326 recv-state error repeated for tqpair=0x129e2c0, 0x1288860, 0xe20850, 0x1259c50, 0xe27d40, 0xe22080, 0xe12b00 and 0xd3e610 ...] 
00:24:24.214 [2024-11-19 11:18:32.398588] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12490d0 is same with the state(6) to be set 
00:24:24.214 [2024-11-19 11:18:32.399815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:24.214 [2024-11-19 11:18:32.399829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same WRITE / ABORTED - SQ DELETION pair repeated for cid:1 through cid:27, lba stepping by 128 from 24704 to 28032, len:128 ...] 
00:24:24.215 [2024-11-19 11:18:32.400317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 
[2024-11-19 11:18:32.400511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-11-19 11:18:32.400780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-11-19 11:18:32.400788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.400805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.400822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.400839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.400855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.400877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 
11:18:32.400894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.400911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.400928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.400936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c350 is same with the state(6) to be set 00:24:24.216 [2024-11-19 11:18:32.401095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401432] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-11-19 11:18:32.401544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-11-19 11:18:32.401551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 
11:18:32.401719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401811] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.401984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.401992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 
[2024-11-19 11:18:32.402009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.402026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.402043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.402060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.402076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.402096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.402113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-11-19 11:18:32.402130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-11-19 11:18:32.402139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-11-19 11:18:32.402147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-11-19 11:18:32.402156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-11-19 11:18:32.402163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-11-19 11:18:32.402173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-11-19 11:18:32.402180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-11-19 11:18:32.402444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:24.218 [2024-11-19 11:18:32.402481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22080 (9): Bad 
file descriptor 00:24:24.218 [2024-11-19 11:18:32.405102] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.218 [2024-11-19 11:18:32.405133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:24.218 [2024-11-19 11:18:32.405145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:24:24.218 [2024-11-19 11:18:32.405157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1259c50 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.405167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20850 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.405648] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.218 [2024-11-19 11:18:32.406212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-11-19 11:18:32.406251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe22080 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-11-19 11:18:32.406264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe22080 is same with the state(6) to be set 00:24:24.218 [2024-11-19 11:18:32.406633] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.218 [2024-11-19 11:18:32.406947] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.218 [2024-11-19 11:18:32.406984] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.218 [2024-11-19 11:18:32.407020] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.218 [2024-11-19 11:18:32.407414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-11-19 11:18:32.407430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0xe20850 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-11-19 11:18:32.407438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20850 is same with the state(6) to be set 00:24:24.218 [2024-11-19 11:18:32.407648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-11-19 11:18:32.407659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1259c50 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-11-19 11:18:32.407667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259c50 is same with the state(6) to be set 00:24:24.218 [2024-11-19 11:18:32.407678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22080 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20850 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1259c50 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:24.218 [2024-11-19 11:18:32.407803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:24.218 [2024-11-19 11:18:32.407812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:24.218 [2024-11-19 11:18:32.407822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:24:24.218 [2024-11-19 11:18:32.407835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12895a0 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129e2c0 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1288860 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27d40 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12b00 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3e610 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.407955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12490d0 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.408041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:24.218 [2024-11-19 11:18:32.408049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:24.218 [2024-11-19 11:18:32.408057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:24.218 [2024-11-19 11:18:32.408063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:24:24.218 [2024-11-19 11:18:32.408070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:24.218 [2024-11-19 11:18:32.408077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:24.218 [2024-11-19 11:18:32.408084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:24.218 [2024-11-19 11:18:32.408091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:24.218 [2024-11-19 11:18:32.415288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:24.218 [2024-11-19 11:18:32.415678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-11-19 11:18:32.415692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe22080 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-11-19 11:18:32.415699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe22080 is same with the state(6) to be set 00:24:24.218 [2024-11-19 11:18:32.415743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22080 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.415782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:24.218 [2024-11-19 11:18:32.415789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:24.218 [2024-11-19 11:18:32.415796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:24.218 [2024-11-19 11:18:32.415803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:24:24.218 [2024-11-19 11:18:32.416421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:24:24.218 [2024-11-19 11:18:32.416433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:24.218 [2024-11-19 11:18:32.416812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-11-19 11:18:32.416825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1259c50 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-11-19 11:18:32.416832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259c50 is same with the state(6) to be set 00:24:24.218 [2024-11-19 11:18:32.417342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-11-19 11:18:32.417383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20850 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-11-19 11:18:32.417395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20850 is same with the state(6) to be set 00:24:24.218 [2024-11-19 11:18:32.417450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1259c50 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.417462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20850 (9): Bad file descriptor 00:24:24.218 [2024-11-19 11:18:32.417500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:24.218 [2024-11-19 11:18:32.417508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:24.218 [2024-11-19 11:18:32.417515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:24:24.218 [2024-11-19 11:18:32.417523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:24.218 [2024-11-19 11:18:32.417532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:24.218 [2024-11-19 11:18:32.417539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:24.218 [2024-11-19 11:18:32.417545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:24.218 [2024-11-19 11:18:32.417552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:24.219 [2024-11-19 11:18:32.417972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418161] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418254] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 
11:18:32.418453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-11-19 11:18:32.418556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-11-19 11:18:32.418563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-11-19 11:18:32.418573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-11-19 11:18:32.418580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-11-19 11:18:32.418590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-11-19 11:18:32.418597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-11-19 11:18:32.418606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-11-19 11:18:32.418614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-11-19 11:18:32.418623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-11-19 11:18:32.418631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-11-19 11:18:32.418640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 00:24:24.220 repeated NOTICE pairs from nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): READ sqid:1 cid:37-63 nsid:1 lba:29312-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:24.221 [2024-11-19 11:18:32.419106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102cef0 is same with the state(6) to be set
[... 00:24:24.221-00:24:24.222 repeated NOTICE pairs: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:24.222 [2024-11-19 11:18:32.421513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12296f0 is same with the state(6) to be set
[... 00:24:24.222-00:24:24.223 repeated NOTICE pairs: READ sqid:1 cid:3-23 nsid:1 lba:24960-27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:24.223 [2024-11-19 11:18:32.423216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 
11:18:32.423324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-11-19 11:18:32.423505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-11-19 11:18:32.423514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:24.224 [2024-11-19 11:18:32.423617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423710] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.423956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.423964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122c130 is same with the state(6) to be set 00:24:24.224 [2024-11-19 11:18:32.425236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.425250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.425263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.425272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.425284] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.425293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.425305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.425314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.425326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.425335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.425345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-11-19 11:18:32.425352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-11-19 11:18:32.425361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425583] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425677] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-11-19 11:18:32.425855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-11-19 11:18:32.425867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.226 [2024-11-19 
11:18:32.425877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.226 [2024-11-19 11:18:32.425884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:24.226 [... the same READ / "ABORTED - SQ DELETION" record pair repeats for cid:37-63 (lba:21120-24448, len:128 each) ...]
00:24:24.226 [2024-11-19 11:18:32.426346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122d650 is same with the state(6) to be set
00:24:24.226 [2024-11-19 11:18:32.427617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.226 [2024-11-19 11:18:32.427630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:24.228 [... the same READ / "ABORTED - SQ DELETION" record pair repeats for cid:1-63 (lba:24704-32640, len:128 each) ...]
00:24:24.228 [2024-11-19 11:18:32.428729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122ebd0 is same with the state(6) to be set
00:24:24.228 [2024-11-19 11:18:32.430005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.228 [2024-11-19 11:18:32.430019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:24.229 [... the same READ / "ABORTED - SQ DELETION" record pair repeats for cid:1-24 (lba:16512-19456, len:128 each) ...]
00:24:24.229 [2024-11-19 11:18:32.430457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.229 [2024-11-19 11:18:32.430465] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 11:18:32.430646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-11-19 
11:18:32.430663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-11-19 11:18:32.430670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:24.230 [2024-11-19 11:18:32.430956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.430992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.430999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.431009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.431017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.431026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.431034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.431043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.431050] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.431060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.431067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.431077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.431084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.431093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.431101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.431110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.431118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-11-19 11:18:32.431126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1230100 is same with the state(6) to be set 00:24:24.230 [2024-11-19 11:18:32.432401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-11-19 11:18:32.432415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:24.231 [2024-11-19 11:18:32.432527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432913] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-11-19 11:18:32.432970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-11-19 11:18:32.432980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.432987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.432997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 
11:18:32.433200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:24.232 [2024-11-19 11:18:32.433488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.232 [2024-11-19 11:18:32.433496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.232 [2024-11-19 11:18:32.433504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030b10 is same with the state(6) to be set 00:24:24.232 [2024-11-19 11:18:32.435020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:24.232 [2024-11-19 11:18:32.435044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:24.233 [2024-11-19 11:18:32.435054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:24.233 [2024-11-19 11:18:32.435063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:24:24.233 [2024-11-19 11:18:32.435146] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:24:24.233 [2024-11-19 11:18:32.435164] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:24:24.233 [2024-11-19 11:18:32.435177] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:24:24.233 [2024-11-19 11:18:32.435259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:24:24.233 [2024-11-19 11:18:32.435270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:24:24.233 task offset: 26880 on job bdev=Nvme2n1 fails
00:24:24.233
00:24:24.233 Latency(us)
00:24:24.233 [2024-11-19T10:18:32.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:24.233 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme1n1 ended in about 0.97 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme1n1 : 0.97 204.90 12.81 65.90 0.00 233732.71 9885.01 242920.11
00:24:24.233 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme2n1 ended in about 0.95 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme2n1 : 0.95 201.95 12.62 67.32 0.00 230341.97 21517.65 242920.11
00:24:24.233 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme3n1 ended in about 0.95 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme3n1 : 0.95 201.13 12.57 67.04 0.00 226566.45 5734.40 230686.72
00:24:24.233 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme4n1 ended in about 0.97 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme4n1 : 0.97 197.21 12.33 65.74 0.00 226559.15 16602.45 249910.61
00:24:24.233 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme5n1 ended in about 0.96 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme5n1 : 0.96 201.92 12.62 66.96 0.00 216529.97 6526.29 221074.77
00:24:24.233 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme6n1 ended in about 0.98 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme6n1 : 0.98 199.79 12.49 65.57 0.00 215161.35 18568.53 249910.61
00:24:24.233 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme7n1 ended in about 0.98 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme7n1 : 0.98 130.82 8.18 65.41 0.00 284864.85 18022.40 269134.51
00:24:24.233 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme8n1 ended in about 0.98 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme8n1 : 0.98 195.76 12.24 65.25 0.00 209422.40 9338.88 253405.87
00:24:24.233 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme9n1 ended in about 0.98 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme9n1 : 0.98 130.19 8.14 65.10 0.00 273843.77 35607.89 263891.63
00:24:24.233 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:24.233 Job: Nvme10n1 ended in about 0.99 seconds with error
00:24:24.233 Verification LBA range: start 0x0 length 0x400
00:24:24.233 Nvme10n1 : 0.99 129.88 8.12 64.94 0.00 268481.99 17257.81 258648.75
00:24:24.233 [2024-11-19T10:18:32.585Z] ===================================================================================================================
00:24:24.233 [2024-11-19T10:18:32.585Z] Total : 1793.56 112.10 659.23 0.00 235496.90 5734.40 269134.51
00:24:24.233 [2024-11-19 11:18:32.459472] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:24.233 [2024-11-19 11:18:32.459507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:24:24.233 [2024-11-19 11:18:32.460096]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.233 [2024-11-19 11:18:32.460140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe12b00 with addr=10.0.0.2, port=4420 00:24:24.233 [2024-11-19 11:18:32.460155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12b00 is same with the state(6) to be set 00:24:24.233 [2024-11-19 11:18:32.460510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.233 [2024-11-19 11:18:32.460522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27d40 with addr=10.0.0.2, port=4420 00:24:24.233 [2024-11-19 11:18:32.460530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27d40 is same with the state(6) to be set 00:24:24.233 [2024-11-19 11:18:32.460887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.233 [2024-11-19 11:18:32.460898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12490d0 with addr=10.0.0.2, port=4420 00:24:24.233 [2024-11-19 11:18:32.460905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12490d0 is same with the state(6) to be set 00:24:24.233 [2024-11-19 11:18:32.461118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.233 [2024-11-19 11:18:32.461128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3e610 with addr=10.0.0.2, port=4420 00:24:24.233 [2024-11-19 11:18:32.461135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e610 is same with the state(6) to be set 00:24:24.233 [2024-11-19 11:18:32.463021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:24.233 [2024-11-19 11:18:32.463038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:24.233 [2024-11-19 11:18:32.463424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.233 [2024-11-19 11:18:32.463438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1288860 with addr=10.0.0.2, port=4420 00:24:24.233 [2024-11-19 11:18:32.463445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1288860 is same with the state(6) to be set 00:24:24.233 [2024-11-19 11:18:32.463837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.233 [2024-11-19 11:18:32.463847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12895a0 with addr=10.0.0.2, port=4420 00:24:24.233 [2024-11-19 11:18:32.463854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12895a0 is same with the state(6) to be set 00:24:24.233 [2024-11-19 11:18:32.464217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.233 [2024-11-19 11:18:32.464227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129e2c0 with addr=10.0.0.2, port=4420 00:24:24.233 [2024-11-19 11:18:32.464235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129e2c0 is same with the state(6) to be set 00:24:24.233 [2024-11-19 11:18:32.464247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12b00 (9): Bad file descriptor 00:24:24.233 [2024-11-19 11:18:32.464259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27d40 (9): Bad file descriptor 00:24:24.233 [2024-11-19 11:18:32.464268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12490d0 (9): Bad file descriptor 00:24:24.233 [2024-11-19 11:18:32.464278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0xd3e610 (9): Bad file descriptor 00:24:24.233 [2024-11-19 11:18:32.464310] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:24:24.233 [2024-11-19 11:18:32.464326] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:24:24.233 [2024-11-19 11:18:32.464342] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:24:24.233 [2024-11-19 11:18:32.464352] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:24:24.233 [2024-11-19 11:18:32.464364] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:24:24.233 [2024-11-19 11:18:32.464635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:24:24.233 [2024-11-19 11:18:32.464996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-11-19 11:18:32.465011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe22080 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-11-19 11:18:32.465019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe22080 is same with the state(6) to be set 00:24:24.234 [2024-11-19 11:18:32.465345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-11-19 11:18:32.465355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20850 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-11-19 11:18:32.465363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20850 is same with the state(6) to be 
set 00:24:24.234 [2024-11-19 11:18:32.465372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1288860 (9): Bad file descriptor 00:24:24.234 [2024-11-19 11:18:32.465382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12895a0 (9): Bad file descriptor 00:24:24.234 [2024-11-19 11:18:32.465391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129e2c0 (9): Bad file descriptor 00:24:24.234 [2024-11-19 11:18:32.465400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.465407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.465416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.465425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:24.234 [2024-11-19 11:18:32.465433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.465440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.465446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.465453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:24:24.234 [2024-11-19 11:18:32.465460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.465466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.465473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.465480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:24.234 [2024-11-19 11:18:32.465487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.465493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.465500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.465510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:24:24.234 [2024-11-19 11:18:32.465954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-11-19 11:18:32.465967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1259c50 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-11-19 11:18:32.465974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259c50 is same with the state(6) to be set 00:24:24.234 [2024-11-19 11:18:32.465983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22080 (9): Bad file descriptor 00:24:24.234 [2024-11-19 11:18:32.465992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20850 (9): Bad file descriptor 00:24:24.234 [2024-11-19 11:18:32.466001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.466007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.466014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.466022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:24:24.234 [2024-11-19 11:18:32.466029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.466035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.466042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:24:24.234 [2024-11-19 11:18:32.466048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:24:24.234 [2024-11-19 11:18:32.466055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.466062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.466069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.466075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:24:24.234 [2024-11-19 11:18:32.466102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1259c50 (9): Bad file descriptor 00:24:24.234 [2024-11-19 11:18:32.466112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.466118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.466125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.466132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:24:24.234 [2024-11-19 11:18:32.466139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.466145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.466152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.466158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:24.234 [2024-11-19 11:18:32.466184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:24.234 [2024-11-19 11:18:32.466191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:24.234 [2024-11-19 11:18:32.466200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:24.234 [2024-11-19 11:18:32.466207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:24:24.565 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 14546 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 14546 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 14546 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.563 rmmod nvme_tcp 00:24:25.563 rmmod nvme_fabrics 00:24:25.563 rmmod nvme_keyring 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:24:25.563 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 14311 ']' 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 14311 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 14311 ']' 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 14311 00:24:25.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (14311) - No such process 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 14311 is not found' 00:24:25.563 Process with pid 14311 is not found 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.563 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.564 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.482 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.482 00:24:27.482 real 0m7.804s 00:24:27.482 user 0m18.873s 00:24:27.482 sys 0m1.296s 00:24:27.482 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.482 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.482 ************************************ 00:24:27.482 END TEST nvmf_shutdown_tc3 00:24:27.482 ************************************ 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:27.744 ************************************ 00:24:27.744 START TEST nvmf_shutdown_tc4 00:24:27.744 ************************************ 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 
-- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.744 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.745 11:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:27.745 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:27.745 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.745 11:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:27.745 Found net devices under 0000:31:00.0: cvl_0_0 00:24:27.745 11:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:27.745 Found net devices under 0000:31:00.1: cvl_0_1 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.745 11:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.745 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.745 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:24:28.013 00:24:28.013 --- 10.0.0.2 ping statistics --- 00:24:28.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.013 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:24:28.013 00:24:28.013 --- 10.0.0.1 ping statistics --- 00:24:28.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.013 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.013 11:18:36 
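The netns/ip commands above (netns add, link set netns, addr add, link up, then a two-way ping) can be reproduced self-contained with a veth pair instead of the physical cvl_0_0/cvl_0_1 ports. This is a sketch under stated assumptions: the names demo_ns_spdk, veth_init, and veth_tgt are illustrative, and root is required, so the script skips itself when unprivileged.

```shell
# Rebuild the log's namespace plumbing with a veth pair (hypothetical
# names; needs root, so skip when unprivileged).
if [ "$(id -u)" -ne 0 ]; then
    echo "skipping: network namespace setup needs root"
    exit 0
fi
set -e
NS=demo_ns_spdk
ip netns add "$NS"
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"                          # target side into the ns
ip addr add 10.0.0.1/24 dev veth_init                     # initiator IP, as in the log
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt  # target IP, as in the log
ip link set veth_init up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
ip netns del "$NS"                                        # cleanup
```

The veth pair stands in for the two NIC ports so the connectivity check works on any Linux box; the real run keeps the target port isolated in its own namespace exactly so that initiator and target traffic traverse the wire.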
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.013 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=15986 00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 15986 00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 15986 ']' 00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
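The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from a waitforlisten-style poll with a retry cap. A minimal sketch of that idea follows; the helper name and polling logic are guesses (the real helper also confirms the RPC server responds, not just that the socket file exists), and only the socket path comes from the log.

```shell
# Hypothetical sketch of a waitforlisten-style poll: succeed once the
# app is alive AND its RPC unix socket exists, within a retry budget.
waitforlisten_sketch() {
    pid=$1
    sock=${2:-/var/tmp/spdk.sock}
    retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # app died while we waited
        [ -S "$sock" ] && return 0               # socket appeared, done
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                                     # gave up
}
```

Checking `kill -0` on every iteration matters: if nvmf_tgt crashes during startup, the loop fails fast instead of burning the whole retry budget.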
00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.014 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:28.276 [2024-11-19 11:18:36.367958] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:24:28.276 [2024-11-19 11:18:36.368019] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.276 [2024-11-19 11:18:36.471613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.276 [2024-11-19 11:18:36.505114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.276 [2024-11-19 11:18:36.505146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.276 [2024-11-19 11:18:36.505152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.276 [2024-11-19 11:18:36.505157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.276 [2024-11-19 11:18:36.505161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:28.276 [2024-11-19 11:18:36.506497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.276 [2024-11-19 11:18:36.506654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.276 [2024-11-19 11:18:36.506810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.276 [2024-11-19 11:18:36.506811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:28.850 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.850 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:28.850 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.850 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.850 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:29.111 [2024-11-19 11:18:37.221198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.111 11:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:29.111 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.112 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:29.112 Malloc1 00:24:29.112 [2024-11-19 11:18:37.340665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.112 Malloc2 00:24:29.112 Malloc3 00:24:29.112 Malloc4 00:24:29.373 Malloc5 00:24:29.373 Malloc6 00:24:29.373 Malloc7 00:24:29.373 Malloc8 00:24:29.373 Malloc9 
00:24:29.373 Malloc10 00:24:29.373 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.373 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:29.373 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.373 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:29.634 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=16368 00:24:29.634 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:29.634 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:29.634 [2024-11-19 11:18:37.807830] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
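The teardown that follows runs a killprocess helper whose guard sequence is visible in the autotest_common.sh trace: reject an empty pid, probe it with `kill -0`, look up the command name, refuse to kill a sudo wrapper, then kill and reap. A hedged sketch of that pattern (the function name is ours; the individual checks mirror the traced lines):

```shell
# Illustrative recreation of the killprocess guard sequence from the
# log; killprocess_sketch is a hypothetical name, not the real helper.
killprocess_sketch() {
    pid=$1
    [ -n "$pid" ] || return 1                 # the "'[' -z ... ']'" guard
    kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 in the log
    [ "$name" = sudo ] && return 1            # never SIGKILL the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
}
```

The `wait` at the end is why the log shows a `wait 15986` after the kill: `wait` only reaps children of the current shell, and it blocks until the target has fully shut down before the next test case starts.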
00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 15986 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 15986 ']' 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 15986 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 15986 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 15986' 00:24:34.935 killing process with pid 15986 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 15986 00:24:34.935 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 15986 00:24:34.935 [2024-11-19 11:18:42.821948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da96f0 is same with the state(6) to be set 00:24:34.935 [2024-11-19 11:18:42.821995] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da96f0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da96f0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da96f0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da96f0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da96f0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da96f0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da96f0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da9be0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da9be0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da9be0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da9be0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da9be0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da9be0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da9be0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.822358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da9be0 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.825907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daaf60 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.825933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daaf60 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.825942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daaf60 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.825947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daaf60 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.825952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daaf60 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.825957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daaf60 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.825961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daaf60 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.825966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daaf60 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab430 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab430 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.826997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.827002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.827006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.827011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.827016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.827021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95560 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.827315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95a50 is same with the state(6) to be set
00:24:34.935 [2024-11-19 11:18:42.827329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95a50 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.827602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95f20 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.827616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95f20 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.827633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95f20 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.827837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95090 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.827858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95090 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.827872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95090 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.827877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95090 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.827882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95090 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92e60 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92e60 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92e60 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92e60 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92e60 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92e60 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93330 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93330 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93330 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93330 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93330 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828525]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93820 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93820 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93820 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93820 is same with the state(6) to be set
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 [2024-11-19 11:18:42.828809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92990 is same with the state(6) to be set
00:24:34.936 starting I/O failed: -6
00:24:34.936 [2024-11-19 11:18:42.828822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92990 is same with Write completed with error (sct=0, sc=8)
00:24:34.936 the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92990 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92990 is same with Write completed with error (sct=0, sc=8)
00:24:34.936 the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92990 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92990 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92990 is same with Write completed with error (sct=0, sc=8)
00:24:34.936 the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.828874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92990 is same with the state(6) to be set
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 [2024-11-19 11:18:42.829251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.936 NVMe io qpair process completion error
00:24:34.936 [2024-11-19 11:18:42.831733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d968c0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d968c0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d968c0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d968c0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d968c0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d968c0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96d90 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96d90 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96d90 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.831995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96d90 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97260 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832435]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d963f0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d963f0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d963f0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d963f0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d963f0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d963f0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d963f0 is same with the state(6) to be set
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 [2024-11-19 11:18:42.832789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d946d0 is same with Write completed with error (sct=0, sc=8)
00:24:34.936 the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d946d0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d946d0 is same with the state(6) to be set
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 [2024-11-19 11:18:42.832814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d946d0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d946d0 is same with the state(6) to be set
00:24:34.936 starting I/O failed: -6
00:24:34.936 [2024-11-19 11:18:42.832824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d946d0 is same with the state(6) to be set
00:24:34.936 [2024-11-19 11:18:42.832829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d946d0 is same with the state(6) to be set
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 [2024-11-19 11:18:42.832833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d946d0 is same with the state(6) to be set
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 Write completed with error (sct=0, sc=8)
00:24:34.936 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 [2024-11-19 11:18:42.833265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93d10 is same with Write completed with error (sct=0, sc=8)
00:24:34.937 the state(6) to be set
00:24:34.937 [2024-11-19 11:18:42.833278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93d10 is same with the state(6) to be set
00:24:34.937 [2024-11-19 11:18:42.833283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93d10 is same with the state(6) to be set
00:24:34.937 [2024-11-19 11:18:42.833287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93d10 is same with the state(6) to be set
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 [2024-11-19 11:18:42.833509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:34.937 Write completed with error (sct=0,
sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 [2024-11-19 11:18:42.834314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 [2024-11-19 11:18:42.835247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O
failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.937 Write completed with error (sct=0, sc=8)
00:24:34.937 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 [2024-11-19 11:18:42.836529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:34.938 NVMe io qpair process completion error
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed:
-6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 [2024-11-19 11:18:42.837886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 [2024-11-19 11:18:42.838735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.938 starting I/O failed: -6
00:24:34.938 Write completed with error (sct=0, sc=8)
00:24:34.939 starting I/O failed: -6
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 starting I/O failed: -6
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 starting I/O failed: -6
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 starting I/O failed: -6
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 starting I/O failed: -6
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 starting I/O failed: -6
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 starting I/O failed: -6
00:24:34.939 Write completed with error (sct=0, sc=8)
00:24:34.939 starting I/O
failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write 
completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 [2024-11-19 11:18:42.839706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 
starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 
00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, 
sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 [2024-11-19 11:18:42.842050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.939 NVMe io qpair process completion error 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.939 starting I/O failed: -6 00:24:34.939 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 
00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 [2024-11-19 11:18:42.843109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:34.940 starting I/O failed: -6 00:24:34.940 starting I/O failed: -6 00:24:34.940 starting I/O failed: -6 00:24:34.940 starting I/O failed: -6 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write 
completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 [2024-11-19 11:18:42.844122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting 
I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write 
completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.940 starting I/O failed: -6 00:24:34.940 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 [2024-11-19 11:18:42.845069] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed 
with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write 
completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 [2024-11-19 11:18:42.847056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.941 NVMe io qpair process completion error 00:24:34.941 Write completed with 
error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 starting I/O failed: -6 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 00:24:34.941 Write completed with error (sct=0, sc=8) 
00:24:34.941 Write completed with error (sct=0, sc=8)
00:24:34.941 starting I/O failed: -6
00:24:34.941 [2024-11-19 11:18:42.848269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:34.942 Write completed with error (sct=0, sc=8)
00:24:34.942 starting I/O failed: -6
00:24:34.942 [2024-11-19 11:18:42.849255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:34.942 [2024-11-19 11:18:42.850193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:34.943 Write completed with error (sct=0, sc=8)
00:24:34.943 starting I/O failed: -6
00:24:34.943 [2024-11-19 11:18:42.852070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.943 NVMe io qpair process completion error
00:24:34.943 [2024-11-19 11:18:42.853233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:34.943 [2024-11-19 11:18:42.854267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.943 [2024-11-19 11:18:42.855205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:34.944 Write completed with error (sct=0, sc=8)
00:24:34.944 starting I/O failed: -6
00:24:34.944 [2024-11-19 11:18:42.857987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:34.944 NVMe io qpair process completion error
00:24:34.944 [2024-11-19 11:18:42.859024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:34.944 [2024-11-19 11:18:42.860029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.945 Write completed with error (sct=0, sc=8)
00:24:34.945 starting I/O failed: -6
00:24:34.945 [2024-11-19 11:18:42.860958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:34.945 Write completed with error
(sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.945 starting I/O failed: -6 00:24:34.945 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 [2024-11-19 11:18:42.862400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.946 NVMe io qpair process completion error 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error 
(sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 
Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 [2024-11-19 11:18:42.863491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 
starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 [2024-11-19 11:18:42.864409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with 
error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 
starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 starting I/O failed: -6 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.946 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 [2024-11-19 11:18:42.865322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O 
failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting 
I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 
starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 [2024-11-19 11:18:42.868045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.947 NVMe io qpair process completion error 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, 
sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 [2024-11-19 11:18:42.869599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with 
error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.947 starting I/O failed: -6 00:24:34.947 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 
00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 [2024-11-19 11:18:42.870413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 
00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with 
error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 starting I/O failed: -6 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 [2024-11-19 11:18:42.871381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:34.948 NVMe io qpair process completion error 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error 
(sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error 
(sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error 
(sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.948 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 
00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 
00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting 
I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write 
completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 
00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 [2024-11-19 11:18:42.875318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.949 starting I/O failed: -6 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.949 starting I/O failed: -6 00:24:34.949 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed 
with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write 
completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 
Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 starting I/O failed: -6 00:24:34.950 [2024-11-19 11:18:42.878574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:34.950 NVMe io qpair process completion error 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 
00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Write completed with error (sct=0, sc=8) 00:24:34.950 Initializing NVMe Controllers 00:24:34.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.950 
Controller IO queue size 128, less than required. 00:24:34.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:24:34.950 Controller IO queue size 128, less than required. 00:24:34.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:34.950 Controller IO queue size 128, less than required. 00:24:34.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:24:34.951 Controller IO queue size 128, less than required. 00:24:34.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:24:34.951 Controller IO queue size 128, less than required. 00:24:34.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:34.951 Controller IO queue size 128, less than required. 00:24:34.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:34.951 Controller IO queue size 128, less than required. 00:24:34.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:34.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:24:34.951 Controller IO queue size 128, less than required. 00:24:34.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:34.951 Controller IO queue size 128, less than required. 00:24:34.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:24:34.951 Controller IO queue size 128, less than required. 00:24:34.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:24:34.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:24:34.951 Initialization complete. Launching workers. 
00:24:34.951 ======================================================== 00:24:34.951 Latency(us) 00:24:34.951 Device Information : IOPS MiB/s Average min max 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1871.04 80.40 68434.36 507.92 123385.56 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1868.87 80.30 68549.66 651.48 125745.79 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1854.59 79.69 69098.16 927.26 128041.88 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1880.56 80.81 68238.29 669.81 130920.23 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1850.48 79.51 69220.55 642.94 131076.34 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1902.41 81.74 67371.30 780.87 130566.53 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1880.56 80.81 67518.58 844.92 124891.87 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1872.77 80.47 67820.77 614.08 123413.96 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1883.37 80.93 67470.89 528.54 125540.59 00:24:34.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1880.99 80.82 67586.24 555.70 125938.48 00:24:34.951 ======================================================== 00:24:34.951 Total : 18745.64 805.48 68126.51 507.92 131076.34 00:24:34.951 00:24:34.951 [2024-11-19 11:18:42.885439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9360 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f7390 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16f79f0 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f89e0 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f76c0 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f86b0 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f8380 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f7060 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f9540 is same with the state(6) to be set 00:24:34.951 [2024-11-19 11:18:42.885713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f8050 is same with the state(6) to be set 00:24:34.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:34.951 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:35.896 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 16368 00:24:35.896 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:35.896 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 16368 00:24:35.896 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:24:35.896 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.896 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:35.896 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 16368 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.897 rmmod nvme_tcp 00:24:35.897 rmmod nvme_fabrics 00:24:35.897 rmmod nvme_keyring 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 15986 ']' 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 15986 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 15986 ']' 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 15986 00:24:35.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (15986) - No such process 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 15986 is not found' 00:24:35.897 Process with pid 15986 is not found 00:24:35.897 11:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.897 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.448 00:24:38.448 real 0m10.328s 00:24:38.448 user 0m28.006s 00:24:38.448 sys 0m3.988s 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.448 11:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:38.448 ************************************ 00:24:38.448 END TEST nvmf_shutdown_tc4 00:24:38.448 ************************************ 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:38.448 00:24:38.448 real 0m44.838s 00:24:38.448 user 1m46.559s 00:24:38.448 sys 0m14.664s 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:38.448 ************************************ 00:24:38.448 END TEST nvmf_shutdown 00:24:38.448 ************************************ 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:38.448 ************************************ 00:24:38.448 START TEST nvmf_nsid 00:24:38.448 ************************************ 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:38.448 * Looking for test storage... 
00:24:38.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.448 
11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.448 --rc genhtml_branch_coverage=1 00:24:38.448 --rc genhtml_function_coverage=1 00:24:38.448 --rc genhtml_legend=1 00:24:38.448 --rc geninfo_all_blocks=1 00:24:38.448 --rc 
geninfo_unexecuted_blocks=1 00:24:38.448 00:24:38.448 ' 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.448 --rc genhtml_branch_coverage=1 00:24:38.448 --rc genhtml_function_coverage=1 00:24:38.448 --rc genhtml_legend=1 00:24:38.448 --rc geninfo_all_blocks=1 00:24:38.448 --rc geninfo_unexecuted_blocks=1 00:24:38.448 00:24:38.448 ' 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.448 --rc genhtml_branch_coverage=1 00:24:38.448 --rc genhtml_function_coverage=1 00:24:38.448 --rc genhtml_legend=1 00:24:38.448 --rc geninfo_all_blocks=1 00:24:38.448 --rc geninfo_unexecuted_blocks=1 00:24:38.448 00:24:38.448 ' 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.448 --rc genhtml_branch_coverage=1 00:24:38.448 --rc genhtml_function_coverage=1 00:24:38.448 --rc genhtml_legend=1 00:24:38.448 --rc geninfo_all_blocks=1 00:24:38.448 --rc geninfo_unexecuted_blocks=1 00:24:38.448 00:24:38.448 ' 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.448 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.449 11:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:38.449 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:46.595 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:46.595 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:46.595 Found net devices under 0000:31:00.0: cvl_0_0 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:46.595 Found net devices under 0000:31:00.1: cvl_0_1 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:46.595 11:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.595 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.596 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.858 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:24:46.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:24:46.858 00:24:46.858 --- 10.0.0.2 ping statistics --- 00:24:46.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.858 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:24:46.858 00:24:46.858 --- 10.0.0.1 ping statistics --- 00:24:46.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.858 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.858 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.858 11:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=22301 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 22301 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 22301 ']' 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.858 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:46.858 [2024-11-19 11:18:55.082238] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:24:46.858 [2024-11-19 11:18:55.082309] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.858 [2024-11-19 11:18:55.174086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.120 [2024-11-19 11:18:55.214606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.121 [2024-11-19 11:18:55.214643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.121 [2024-11-19 11:18:55.214651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.121 [2024-11-19 11:18:55.214658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.121 [2024-11-19 11:18:55.214664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:47.121 [2024-11-19 11:18:55.215316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=22425 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.693 11:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=4c40a710-bef8-4be4-87aa-eb4fecdf0767 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1372375d-6f88-47de-aaaa-184acddf9616 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c446236e-1a89-4d02-abfc-90a19ec8958f 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.693 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:47.693 null0 00:24:47.693 null1 00:24:47.693 null2 00:24:47.693 [2024-11-19 11:18:55.957577] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:24:47.693 [2024-11-19 11:18:55.957628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid22425 ] 00:24:47.693 [2024-11-19 11:18:55.960803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.693 [2024-11-19 11:18:55.985000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.693 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.693 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 22425 /var/tmp/tgt2.sock 00:24:47.693 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 22425 ']' 00:24:47.693 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:47.693 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.693 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:47.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:47.693 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.694 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:47.954 [2024-11-19 11:18:56.051723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.954 [2024-11-19 11:18:56.087716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.954 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.954 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:47.954 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:48.217 [2024-11-19 11:18:56.558741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.477 [2024-11-19 11:18:56.574871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:48.477 nvme0n1 nvme0n2 00:24:48.477 nvme1n1 00:24:48.477 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:48.477 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:48.478 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:49.866 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:49.866 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:49.867 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 4c40a710-bef8-4be4-87aa-eb4fecdf0767 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:50.811 11:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:50.811 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4c40a710bef84be487aaeb4fecdf0767 00:24:50.812 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4C40A710BEF84BE487AAEB4FECDF0767 00:24:50.812 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 4C40A710BEF84BE487AAEB4FECDF0767 == \4\C\4\0\A\7\1\0\B\E\F\8\4\B\E\4\8\7\A\A\E\B\4\F\E\C\D\F\0\7\6\7 ]] 00:24:50.812 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:50.812 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:50.812 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:50.812 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1372375d-6f88-47de-aaaa-184acddf9616 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:51.089 
11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1372375d6f8847deaaaa184acddf9616 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1372375D6F8847DEAAAA184ACDDF9616 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1372375D6F8847DEAAAA184ACDDF9616 == \1\3\7\2\3\7\5\D\6\F\8\8\4\7\D\E\A\A\A\A\1\8\4\A\C\D\D\F\9\6\1\6 ]] 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:51.089 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c446236e-1a89-4d02-abfc-90a19ec8958f 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c446236e1a894d02abfc90a19ec8958f 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C446236E1A894D02ABFC90A19EC8958F 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C446236E1A894D02ABFC90A19EC8958F == \C\4\4\6\2\3\6\E\1\A\8\9\4\D\0\2\A\B\F\C\9\0\A\1\9\E\C\8\9\5\8\F ]] 00:24:51.090 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 22425 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 22425 ']' 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 22425 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 22425 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 22425' 00:24:51.355 killing process with pid 22425 00:24:51.355 11:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 22425 00:24:51.355 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 22425 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.617 rmmod nvme_tcp 00:24:51.617 rmmod nvme_fabrics 00:24:51.617 rmmod nvme_keyring 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 22301 ']' 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 22301 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 22301 ']' 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 22301 00:24:51.617 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:51.618 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.618 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 22301 00:24:51.618 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:51.618 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:51.618 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 22301' 00:24:51.618 killing process with pid 22301 00:24:51.618 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 22301 00:24:51.618 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 22301 00:24:51.880 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.880 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.880 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.880 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:51.880 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:51.880 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.881 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.881 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.881 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.881 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.881 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.881 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.798 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.798 00:24:53.798 real 0m15.737s 00:24:53.798 user 0m11.442s 00:24:53.798 sys 0m7.463s 00:24:53.798 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.798 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:53.798 ************************************ 00:24:53.798 END TEST nvmf_nsid 00:24:53.798 ************************************ 00:24:54.060 11:19:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:54.060 00:24:54.060 real 13m25.363s 00:24:54.060 user 27m20.567s 00:24:54.060 sys 4m8.535s 00:24:54.060 11:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.060 11:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:54.060 ************************************ 00:24:54.060 END TEST nvmf_target_extra 00:24:54.060 ************************************ 00:24:54.060 11:19:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:54.060 11:19:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:54.060 11:19:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.060 11:19:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.060 ************************************ 00:24:54.060 START TEST nvmf_host 00:24:54.060 ************************************ 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:54.060 * Looking for test storage... 
00:24:54.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:54.060 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:54.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.323 --rc genhtml_branch_coverage=1 00:24:54.323 --rc genhtml_function_coverage=1 00:24:54.323 --rc genhtml_legend=1 00:24:54.323 --rc geninfo_all_blocks=1 00:24:54.323 --rc geninfo_unexecuted_blocks=1 00:24:54.323 00:24:54.323 ' 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:54.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.323 --rc genhtml_branch_coverage=1 00:24:54.323 --rc genhtml_function_coverage=1 00:24:54.323 --rc genhtml_legend=1 00:24:54.323 --rc 
geninfo_all_blocks=1 00:24:54.323 --rc geninfo_unexecuted_blocks=1 00:24:54.323 00:24:54.323 ' 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:54.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.323 --rc genhtml_branch_coverage=1 00:24:54.323 --rc genhtml_function_coverage=1 00:24:54.323 --rc genhtml_legend=1 00:24:54.323 --rc geninfo_all_blocks=1 00:24:54.323 --rc geninfo_unexecuted_blocks=1 00:24:54.323 00:24:54.323 ' 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:54.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.323 --rc genhtml_branch_coverage=1 00:24:54.323 --rc genhtml_function_coverage=1 00:24:54.323 --rc genhtml_legend=1 00:24:54.323 --rc geninfo_all_blocks=1 00:24:54.323 --rc geninfo_unexecuted_blocks=1 00:24:54.323 00:24:54.323 ' 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.323 11:19:02 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.324 ************************************ 00:24:54.324 START TEST nvmf_multicontroller 00:24:54.324 ************************************ 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:54.324 * Looking for test storage... 
00:24:54.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:54.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.324 --rc genhtml_branch_coverage=1 00:24:54.324 --rc genhtml_function_coverage=1 
00:24:54.324 --rc genhtml_legend=1 00:24:54.324 --rc geninfo_all_blocks=1 00:24:54.324 --rc geninfo_unexecuted_blocks=1 00:24:54.324 00:24:54.324 ' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:54.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.324 --rc genhtml_branch_coverage=1 00:24:54.324 --rc genhtml_function_coverage=1 00:24:54.324 --rc genhtml_legend=1 00:24:54.324 --rc geninfo_all_blocks=1 00:24:54.324 --rc geninfo_unexecuted_blocks=1 00:24:54.324 00:24:54.324 ' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:54.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.324 --rc genhtml_branch_coverage=1 00:24:54.324 --rc genhtml_function_coverage=1 00:24:54.324 --rc genhtml_legend=1 00:24:54.324 --rc geninfo_all_blocks=1 00:24:54.324 --rc geninfo_unexecuted_blocks=1 00:24:54.324 00:24:54.324 ' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:54.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.324 --rc genhtml_branch_coverage=1 00:24:54.324 --rc genhtml_function_coverage=1 00:24:54.324 --rc genhtml_legend=1 00:24:54.324 --rc geninfo_all_blocks=1 00:24:54.324 --rc geninfo_unexecuted_blocks=1 00:24:54.324 00:24:54.324 ' 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.324 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.588 11:19:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:54.588 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:54.589 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.589 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.589 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.589 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:54.589 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:54.589 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.589 11:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:02.740 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.740 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:02.740 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:02.741 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:02.741 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:02.741 11:19:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:02.741 Found net devices under 0000:31:00.0: cvl_0_0 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:02.741 Found net devices under 0000:31:00.1: cvl_0_1 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:02.741 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:02.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:25:02.742 00:25:02.742 --- 10.0.0.2 ping statistics --- 00:25:02.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.742 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:25:02.742 00:25:02.742 --- 10.0.0.1 ping statistics --- 00:25:02.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.742 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=28119 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 28119 00:25:02.742 11:19:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 28119 ']' 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.742 11:19:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:02.742 [2024-11-19 11:19:11.027288] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:25:02.742 [2024-11-19 11:19:11.027340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.005 [2024-11-19 11:19:11.133740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:03.005 [2024-11-19 11:19:11.180679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.005 [2024-11-19 11:19:11.180730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:03.005 [2024-11-19 11:19:11.180738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.005 [2024-11-19 11:19:11.180745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.005 [2024-11-19 11:19:11.180752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.005 [2024-11-19 11:19:11.182581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.005 [2024-11-19 11:19:11.182765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.005 [2024-11-19 11:19:11.182767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.578 [2024-11-19 11:19:11.881756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.578 Malloc0 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.578 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.840 [2024-11-19 
11:19:11.938229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.840 [2024-11-19 11:19:11.946153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:03.840 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.841 Malloc1 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=28226 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 28226 /var/tmp/bdevperf.sock 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # '[' -z 28226 ']' 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:03.841 11:19:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:04.784 11:19:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.784 11:19:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:04.784 11:19:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:04.784 11:19:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.784 11:19:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:04.784 NVMe0n1 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.784 1 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:04.784 11:19:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:04.784 request: 00:25:04.784 { 00:25:04.784 "name": "NVMe0", 00:25:04.784 "trtype": "tcp", 00:25:04.784 "traddr": "10.0.0.2", 00:25:04.784 "adrfam": "ipv4", 00:25:04.784 "trsvcid": "4420", 00:25:04.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.784 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:04.784 "hostaddr": "10.0.0.1", 00:25:04.784 "prchk_reftag": false, 00:25:04.784 "prchk_guard": false, 00:25:04.784 "hdgst": false, 00:25:04.784 "ddgst": false, 00:25:04.784 "allow_unrecognized_csi": false, 00:25:04.784 "method": "bdev_nvme_attach_controller", 00:25:04.784 "req_id": 1 00:25:04.784 } 00:25:04.784 Got JSON-RPC error response 00:25:04.784 response: 00:25:04.784 { 00:25:04.784 "code": -114, 00:25:04.784 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:04.784 } 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:04.784 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:04.785 11:19:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:04.785 request: 00:25:04.785 { 00:25:04.785 "name": "NVMe0", 00:25:04.785 "trtype": "tcp", 00:25:04.785 "traddr": "10.0.0.2", 00:25:04.785 "adrfam": "ipv4", 00:25:04.785 "trsvcid": "4420", 00:25:04.785 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:04.785 "hostaddr": "10.0.0.1", 00:25:04.785 "prchk_reftag": false, 00:25:04.785 "prchk_guard": false, 00:25:04.785 "hdgst": false, 00:25:04.785 "ddgst": false, 00:25:04.785 "allow_unrecognized_csi": false, 00:25:04.785 "method": "bdev_nvme_attach_controller", 00:25:04.785 "req_id": 1 00:25:04.785 } 00:25:04.785 Got JSON-RPC error response 00:25:04.785 response: 00:25:04.785 { 00:25:04.785 "code": -114, 00:25:04.785 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:04.785 } 00:25:04.785 11:19:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:04.785 request: 00:25:04.785 { 00:25:04.785 "name": "NVMe0", 00:25:04.785 "trtype": "tcp", 00:25:04.785 "traddr": "10.0.0.2", 00:25:04.785 "adrfam": "ipv4", 00:25:04.785 "trsvcid": "4420", 00:25:04.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.785 "hostaddr": "10.0.0.1", 00:25:04.785 "prchk_reftag": false, 00:25:04.785 "prchk_guard": false, 00:25:04.785 "hdgst": false, 00:25:04.785 "ddgst": false, 00:25:04.785 "multipath": "disable", 00:25:04.785 "allow_unrecognized_csi": false, 00:25:04.785 "method": "bdev_nvme_attach_controller", 00:25:04.785 "req_id": 1 00:25:04.785 } 00:25:04.785 Got JSON-RPC error response 00:25:04.785 response: 00:25:04.785 { 00:25:04.785 "code": -114, 00:25:04.785 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:04.785 } 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.785 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.046 request: 00:25:05.046 { 00:25:05.046 "name": "NVMe0", 00:25:05.046 "trtype": "tcp", 00:25:05.046 "traddr": "10.0.0.2", 00:25:05.046 "adrfam": "ipv4", 00:25:05.046 "trsvcid": "4420", 00:25:05.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.046 "hostaddr": "10.0.0.1", 00:25:05.046 "prchk_reftag": false, 00:25:05.046 "prchk_guard": false, 00:25:05.046 "hdgst": false, 00:25:05.046 "ddgst": false, 00:25:05.046 "multipath": "failover", 00:25:05.046 "allow_unrecognized_csi": false, 00:25:05.046 "method": "bdev_nvme_attach_controller", 00:25:05.046 "req_id": 1 00:25:05.046 } 00:25:05.046 Got JSON-RPC error response 00:25:05.046 response: 00:25:05.046 { 00:25:05.046 "code": -114, 00:25:05.046 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:05.046 } 00:25:05.046 11:19:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.046 NVMe0n1 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.046 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.307 00:25:05.307 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.307 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:05.307 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:05.307 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.307 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.308 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.308 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:05.308 11:19:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.695 { 00:25:06.695 "results": [ 00:25:06.695 { 00:25:06.695 "job": "NVMe0n1", 00:25:06.695 "core_mask": "0x1", 00:25:06.695 "workload": "write", 00:25:06.695 "status": "finished", 00:25:06.695 "queue_depth": 128, 00:25:06.695 "io_size": 4096, 00:25:06.695 "runtime": 1.006152, 00:25:06.695 "iops": 23364.263053693678, 00:25:06.695 "mibps": 91.26665255349093, 00:25:06.695 "io_failed": 0, 00:25:06.695 "io_timeout": 0, 00:25:06.695 "avg_latency_us": 5462.322427542396, 00:25:06.695 "min_latency_us": 2075.306666666667, 00:25:06.695 "max_latency_us": 15073.28 00:25:06.695 } 00:25:06.695 ], 00:25:06.695 "core_count": 1 00:25:06.695 } 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 28226 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 28226 ']' 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 28226 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 28226 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 28226' 00:25:06.695 killing process with pid 28226 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 28226 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 28226 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:25:06.695 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:06.695 [2024-11-19 11:19:12.047265] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:25:06.695 [2024-11-19 11:19:12.047320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid28226 ] 00:25:06.695 [2024-11-19 11:19:12.126061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.695 [2024-11-19 11:19:12.163927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.695 [2024-11-19 11:19:13.585080] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name b9896493-4be0-4cf2-be18-42f6aac2c534 already exists 00:25:06.695 [2024-11-19 11:19:13.585111] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:b9896493-4be0-4cf2-be18-42f6aac2c534 alias for bdev NVMe1n1 00:25:06.695 [2024-11-19 11:19:13.585120] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:06.695 Running I/O for 1 seconds... 00:25:06.695 23332.00 IOPS, 91.14 MiB/s 00:25:06.695 Latency(us) 00:25:06.695 [2024-11-19T10:19:15.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.695 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:06.695 NVMe0n1 : 1.01 23364.26 91.27 0.00 0.00 5462.32 2075.31 15073.28 00:25:06.695 [2024-11-19T10:19:15.047Z] =================================================================================================================== 00:25:06.695 [2024-11-19T10:19:15.047Z] Total : 23364.26 91.27 0.00 0.00 5462.32 2075.31 15073.28 00:25:06.695 Received shutdown signal, test time was about 1.000000 seconds 00:25:06.695 00:25:06.695 Latency(us) 00:25:06.695 [2024-11-19T10:19:15.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.695 [2024-11-19T10:19:15.047Z] =================================================================================================================== 00:25:06.695 [2024-11-19T10:19:15.047Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:25:06.695 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.695 11:19:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.695 rmmod nvme_tcp 00:25:06.695 rmmod nvme_fabrics 00:25:06.695 rmmod nvme_keyring 00:25:06.695 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.695 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:06.695 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:06.695 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 28119 ']' 00:25:06.695 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 28119 00:25:06.695 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 28119 ']' 00:25:06.695 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 28119 00:25:06.695 
11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:06.695 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.956 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 28119 00:25:06.956 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:06.956 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 28119' 00:25:06.957 killing process with pid 28119 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 28119 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 28119 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.957 11:19:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.957 11:19:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:09.507 00:25:09.507 real 0m14.826s 00:25:09.507 user 0m17.843s 00:25:09.507 sys 0m6.899s 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.507 ************************************ 00:25:09.507 END TEST nvmf_multicontroller 00:25:09.507 ************************************ 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.507 ************************************ 00:25:09.507 START TEST nvmf_aer 00:25:09.507 ************************************ 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:09.507 * Looking for test storage... 
00:25:09.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:09.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.507 --rc genhtml_branch_coverage=1 00:25:09.507 --rc genhtml_function_coverage=1 00:25:09.507 --rc genhtml_legend=1 00:25:09.507 --rc geninfo_all_blocks=1 00:25:09.507 --rc geninfo_unexecuted_blocks=1 00:25:09.507 00:25:09.507 ' 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:09.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.507 --rc 
genhtml_branch_coverage=1 00:25:09.507 --rc genhtml_function_coverage=1 00:25:09.507 --rc genhtml_legend=1 00:25:09.507 --rc geninfo_all_blocks=1 00:25:09.507 --rc geninfo_unexecuted_blocks=1 00:25:09.507 00:25:09.507 ' 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:09.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.507 --rc genhtml_branch_coverage=1 00:25:09.507 --rc genhtml_function_coverage=1 00:25:09.507 --rc genhtml_legend=1 00:25:09.507 --rc geninfo_all_blocks=1 00:25:09.507 --rc geninfo_unexecuted_blocks=1 00:25:09.507 00:25:09.507 ' 00:25:09.507 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:09.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.507 --rc genhtml_branch_coverage=1 00:25:09.507 --rc genhtml_function_coverage=1 00:25:09.507 --rc genhtml_legend=1 00:25:09.507 --rc geninfo_all_blocks=1 00:25:09.507 --rc geninfo_unexecuted_blocks=1 00:25:09.507 00:25:09.508 ' 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.508 11:19:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.508 11:19:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:17.662 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:17.662 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.662 11:19:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:17.662 Found net devices under 0000:31:00.0: cvl_0_0 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.662 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:17.663 Found net devices under 0000:31:00.1: cvl_0_1 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:17.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:25:17.663 00:25:17.663 --- 10.0.0.2 ping statistics --- 00:25:17.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.663 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:17.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:25:17.663 00:25:17.663 --- 10.0.0.1 ping statistics --- 00:25:17.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.663 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:17.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=33594 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 33594 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 33594 ']' 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.925 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:17.925 [2024-11-19 11:19:26.100621] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:25:17.925 [2024-11-19 11:19:26.100688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.925 [2024-11-19 11:19:26.193202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.926 [2024-11-19 11:19:26.234874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:17.926 [2024-11-19 11:19:26.234910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.926 [2024-11-19 11:19:26.234918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.926 [2024-11-19 11:19:26.234925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.926 [2024-11-19 11:19:26.234930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.926 [2024-11-19 11:19:26.236543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.926 [2024-11-19 11:19:26.236660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.926 [2024-11-19 11:19:26.236816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.926 [2024-11-19 11:19:26.236818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:18.871 [2024-11-19 11:19:26.947505] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:18.871 Malloc0 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.871 11:19:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:18.871 [2024-11-19 11:19:27.017196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:18.871 [ 00:25:18.871 { 00:25:18.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:18.871 "subtype": "Discovery", 00:25:18.871 "listen_addresses": [], 00:25:18.871 "allow_any_host": true, 00:25:18.871 "hosts": [] 00:25:18.871 }, 00:25:18.871 { 00:25:18.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.871 "subtype": "NVMe", 00:25:18.871 "listen_addresses": [ 00:25:18.871 { 00:25:18.871 "trtype": "TCP", 00:25:18.871 "adrfam": "IPv4", 00:25:18.871 "traddr": "10.0.0.2", 00:25:18.871 "trsvcid": "4420" 00:25:18.871 } 00:25:18.871 ], 00:25:18.871 "allow_any_host": true, 00:25:18.871 "hosts": [], 00:25:18.871 "serial_number": "SPDK00000000000001", 00:25:18.871 "model_number": "SPDK bdev Controller", 00:25:18.871 "max_namespaces": 2, 00:25:18.871 "min_cntlid": 1, 00:25:18.871 "max_cntlid": 65519, 00:25:18.871 "namespaces": [ 00:25:18.871 { 00:25:18.871 "nsid": 1, 00:25:18.871 "bdev_name": "Malloc0", 00:25:18.871 "name": "Malloc0", 00:25:18.871 "nguid": "EA1DBC09823F456C9805EF9C377A74E7", 00:25:18.871 "uuid": "ea1dbc09-823f-456c-9805-ef9c377a74e7" 00:25:18.871 } 00:25:18.871 ] 00:25:18.871 } 00:25:18.871 ] 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=33943 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:25:18.871 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:19.133 Malloc1 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:19.133 Asynchronous Event Request test 00:25:19.133 Attaching to 10.0.0.2 00:25:19.133 Attached to 10.0.0.2 00:25:19.133 Registering asynchronous event callbacks... 00:25:19.133 Starting namespace attribute notice tests for all controllers... 00:25:19.133 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:19.133 aer_cb - Changed Namespace 00:25:19.133 Cleaning up... 
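The `nvmf_get_subsystems` replies in this log are plain JSON, so the AER test's expectation — a second namespace on cnode1 once Malloc1 is attached — can be checked with ordinary shell tools. A minimal sketch over a trimmed copy of the reply (only the fields counted here are kept):

```shell
# Trimmed copy of the nvmf_get_subsystems reply shown in this log; the
# discovery subsystem carries no namespaces.
reply='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "namespaces": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc0"},
     {"nsid": 2, "bdev_name": "Malloc1"}]}
]'
# Count namespace entries; the AER test expects 2 after Malloc1 is added.
ns_count=$(printf '%s\n' "$reply" | grep -c '"nsid"')
echo "$ns_count"
```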
00:25:19.133 [ 00:25:19.133 { 00:25:19.133 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:19.133 "subtype": "Discovery", 00:25:19.133 "listen_addresses": [], 00:25:19.133 "allow_any_host": true, 00:25:19.133 "hosts": [] 00:25:19.133 }, 00:25:19.133 { 00:25:19.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.133 "subtype": "NVMe", 00:25:19.133 "listen_addresses": [ 00:25:19.133 { 00:25:19.133 "trtype": "TCP", 00:25:19.133 "adrfam": "IPv4", 00:25:19.133 "traddr": "10.0.0.2", 00:25:19.133 "trsvcid": "4420" 00:25:19.133 } 00:25:19.133 ], 00:25:19.133 "allow_any_host": true, 00:25:19.133 "hosts": [], 00:25:19.133 "serial_number": "SPDK00000000000001", 00:25:19.133 "model_number": "SPDK bdev Controller", 00:25:19.133 "max_namespaces": 2, 00:25:19.133 "min_cntlid": 1, 00:25:19.133 "max_cntlid": 65519, 00:25:19.133 "namespaces": [ 00:25:19.133 { 00:25:19.133 "nsid": 1, 00:25:19.133 "bdev_name": "Malloc0", 00:25:19.133 "name": "Malloc0", 00:25:19.133 "nguid": "EA1DBC09823F456C9805EF9C377A74E7", 00:25:19.133 "uuid": "ea1dbc09-823f-456c-9805-ef9c377a74e7" 00:25:19.133 }, 00:25:19.133 { 00:25:19.133 "nsid": 2, 00:25:19.133 "bdev_name": "Malloc1", 00:25:19.133 "name": "Malloc1", 00:25:19.133 "nguid": "AC4FCBD4DB434F119C4C4F7F347BBD47", 00:25:19.133 "uuid": "ac4fcbd4-db43-4f11-9c4c-4f7f347bbd47" 00:25:19.133 } 00:25:19.133 ] 00:25:19.133 } 00:25:19.133 ] 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 33943 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.133 11:19:27 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.133 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.134 rmmod nvme_tcp 00:25:19.134 rmmod nvme_fabrics 00:25:19.134 rmmod nvme_keyring 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
33594 ']' 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 33594 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 33594 ']' 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 33594 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.134 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 33594 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 33594' 00:25:19.396 killing process with pid 33594 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 33594 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 33594 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.396 11:19:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.944 11:19:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.944 00:25:21.944 real 0m12.313s 00:25:21.944 user 0m8.168s 00:25:21.944 sys 0m6.713s 00:25:21.944 11:19:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.944 11:19:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.944 ************************************ 00:25:21.944 END TEST nvmf_aer 00:25:21.944 ************************************ 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.945 ************************************ 00:25:21.945 START TEST nvmf_async_init 00:25:21.945 ************************************ 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:21.945 * Looking for test storage... 
00:25:21.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.945 11:19:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:21.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.945 --rc genhtml_branch_coverage=1 00:25:21.945 --rc genhtml_function_coverage=1 00:25:21.945 --rc genhtml_legend=1 00:25:21.945 --rc geninfo_all_blocks=1 00:25:21.945 --rc geninfo_unexecuted_blocks=1 00:25:21.945 
00:25:21.945 ' 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:21.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.945 --rc genhtml_branch_coverage=1 00:25:21.945 --rc genhtml_function_coverage=1 00:25:21.945 --rc genhtml_legend=1 00:25:21.945 --rc geninfo_all_blocks=1 00:25:21.945 --rc geninfo_unexecuted_blocks=1 00:25:21.945 00:25:21.945 ' 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:21.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.945 --rc genhtml_branch_coverage=1 00:25:21.945 --rc genhtml_function_coverage=1 00:25:21.945 --rc genhtml_legend=1 00:25:21.945 --rc geninfo_all_blocks=1 00:25:21.945 --rc geninfo_unexecuted_blocks=1 00:25:21.945 00:25:21.945 ' 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:21.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.945 --rc genhtml_branch_coverage=1 00:25:21.945 --rc genhtml_function_coverage=1 00:25:21.945 --rc genhtml_legend=1 00:25:21.945 --rc geninfo_all_blocks=1 00:25:21.945 --rc geninfo_unexecuted_blocks=1 00:25:21.945 00:25:21.945 ' 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.945 11:19:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.945 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=bfa9ba6cd9ab47a98aaf3f59f38e453a 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.946 11:19:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.209 11:19:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:30.209 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:30.209 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:30.209 Found net devices under 0000:31:00.0: cvl_0_0 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.209 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:30.210 Found net devices under 0000:31:00.1: cvl_0_1 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:30.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:25:30.210 00:25:30.210 --- 10.0.0.2 ping statistics --- 00:25:30.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.210 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:25:30.210 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:25:30.470 00:25:30.470 --- 10.0.0.1 ping statistics --- 00:25:30.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.470 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:25:30.470 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.470 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:30.470 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.470 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.470 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.470 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.470 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.470 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=38677 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 38677 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 38677 ']' 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.471 11:19:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:30.471 [2024-11-19 11:19:38.659857] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:25:30.471 [2024-11-19 11:19:38.659925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.471 [2024-11-19 11:19:38.747194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.471 [2024-11-19 11:19:38.782450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.471 [2024-11-19 11:19:38.782482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.471 [2024-11-19 11:19:38.782490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.471 [2024-11-19 11:19:38.782497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.471 [2024-11-19 11:19:38.782503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:30.471 [2024-11-19 11:19:38.783074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 [2024-11-19 11:19:39.491246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 null0 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bfa9ba6cd9ab47a98aaf3f59f38e453a 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.412 [2024-11-19 11:19:39.551532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.412 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.673 nvme0n1 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.673 [ 00:25:31.673 { 00:25:31.673 "name": "nvme0n1", 00:25:31.673 "aliases": [ 00:25:31.673 "bfa9ba6c-d9ab-47a9-8aaf-3f59f38e453a" 00:25:31.673 ], 00:25:31.673 "product_name": "NVMe disk", 00:25:31.673 "block_size": 512, 00:25:31.673 "num_blocks": 2097152, 00:25:31.673 "uuid": "bfa9ba6c-d9ab-47a9-8aaf-3f59f38e453a", 00:25:31.673 "numa_id": 0, 00:25:31.673 "assigned_rate_limits": { 00:25:31.673 "rw_ios_per_sec": 0, 00:25:31.673 "rw_mbytes_per_sec": 0, 00:25:31.673 "r_mbytes_per_sec": 0, 00:25:31.673 "w_mbytes_per_sec": 0 00:25:31.673 }, 00:25:31.673 "claimed": false, 00:25:31.673 "zoned": false, 00:25:31.673 "supported_io_types": { 00:25:31.673 "read": true, 00:25:31.673 "write": true, 00:25:31.673 "unmap": false, 00:25:31.673 "flush": true, 00:25:31.673 "reset": true, 00:25:31.673 "nvme_admin": true, 00:25:31.673 "nvme_io": true, 00:25:31.673 "nvme_io_md": false, 00:25:31.673 "write_zeroes": true, 00:25:31.673 "zcopy": false, 00:25:31.673 "get_zone_info": false, 00:25:31.673 "zone_management": false, 00:25:31.673 "zone_append": false, 00:25:31.673 "compare": true, 00:25:31.673 "compare_and_write": true, 00:25:31.673 "abort": true, 00:25:31.673 "seek_hole": false, 00:25:31.673 "seek_data": false, 00:25:31.673 "copy": true, 00:25:31.673 
"nvme_iov_md": false 00:25:31.673 }, 00:25:31.673 "memory_domains": [ 00:25:31.673 { 00:25:31.673 "dma_device_id": "system", 00:25:31.673 "dma_device_type": 1 00:25:31.673 } 00:25:31.673 ], 00:25:31.673 "driver_specific": { 00:25:31.673 "nvme": [ 00:25:31.673 { 00:25:31.673 "trid": { 00:25:31.673 "trtype": "TCP", 00:25:31.673 "adrfam": "IPv4", 00:25:31.673 "traddr": "10.0.0.2", 00:25:31.673 "trsvcid": "4420", 00:25:31.673 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:31.673 }, 00:25:31.673 "ctrlr_data": { 00:25:31.673 "cntlid": 1, 00:25:31.673 "vendor_id": "0x8086", 00:25:31.673 "model_number": "SPDK bdev Controller", 00:25:31.673 "serial_number": "00000000000000000000", 00:25:31.673 "firmware_revision": "25.01", 00:25:31.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.673 "oacs": { 00:25:31.673 "security": 0, 00:25:31.673 "format": 0, 00:25:31.673 "firmware": 0, 00:25:31.673 "ns_manage": 0 00:25:31.673 }, 00:25:31.673 "multi_ctrlr": true, 00:25:31.673 "ana_reporting": false 00:25:31.673 }, 00:25:31.673 "vs": { 00:25:31.673 "nvme_version": "1.3" 00:25:31.673 }, 00:25:31.673 "ns_data": { 00:25:31.673 "id": 1, 00:25:31.673 "can_share": true 00:25:31.673 } 00:25:31.673 } 00:25:31.673 ], 00:25:31.673 "mp_policy": "active_passive" 00:25:31.673 } 00:25:31.673 } 00:25:31.673 ] 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.673 [2024-11-19 11:19:39.825723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:31.673 [2024-11-19 11:19:39.825784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1e33460 (9): Bad file descriptor 00:25:31.673 [2024-11-19 11:19:39.957960] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.673 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.673 [ 00:25:31.673 { 00:25:31.673 "name": "nvme0n1", 00:25:31.673 "aliases": [ 00:25:31.673 "bfa9ba6c-d9ab-47a9-8aaf-3f59f38e453a" 00:25:31.673 ], 00:25:31.673 "product_name": "NVMe disk", 00:25:31.673 "block_size": 512, 00:25:31.673 "num_blocks": 2097152, 00:25:31.673 "uuid": "bfa9ba6c-d9ab-47a9-8aaf-3f59f38e453a", 00:25:31.673 "numa_id": 0, 00:25:31.673 "assigned_rate_limits": { 00:25:31.673 "rw_ios_per_sec": 0, 00:25:31.673 "rw_mbytes_per_sec": 0, 00:25:31.673 "r_mbytes_per_sec": 0, 00:25:31.673 "w_mbytes_per_sec": 0 00:25:31.673 }, 00:25:31.673 "claimed": false, 00:25:31.674 "zoned": false, 00:25:31.674 "supported_io_types": { 00:25:31.674 "read": true, 00:25:31.674 "write": true, 00:25:31.674 "unmap": false, 00:25:31.674 "flush": true, 00:25:31.674 "reset": true, 00:25:31.674 "nvme_admin": true, 00:25:31.674 "nvme_io": true, 00:25:31.674 "nvme_io_md": false, 00:25:31.674 "write_zeroes": true, 00:25:31.674 "zcopy": false, 00:25:31.674 "get_zone_info": false, 00:25:31.674 "zone_management": false, 00:25:31.674 "zone_append": false, 00:25:31.674 "compare": true, 00:25:31.674 "compare_and_write": true, 00:25:31.674 "abort": true, 00:25:31.674 "seek_hole": false, 00:25:31.674 "seek_data": false, 00:25:31.674 "copy": true, 00:25:31.674 "nvme_iov_md": false 00:25:31.674 }, 00:25:31.674 "memory_domains": [ 
00:25:31.674 { 00:25:31.674 "dma_device_id": "system", 00:25:31.674 "dma_device_type": 1 00:25:31.674 } 00:25:31.674 ], 00:25:31.674 "driver_specific": { 00:25:31.674 "nvme": [ 00:25:31.674 { 00:25:31.674 "trid": { 00:25:31.674 "trtype": "TCP", 00:25:31.674 "adrfam": "IPv4", 00:25:31.674 "traddr": "10.0.0.2", 00:25:31.674 "trsvcid": "4420", 00:25:31.674 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:31.674 }, 00:25:31.674 "ctrlr_data": { 00:25:31.674 "cntlid": 2, 00:25:31.674 "vendor_id": "0x8086", 00:25:31.674 "model_number": "SPDK bdev Controller", 00:25:31.674 "serial_number": "00000000000000000000", 00:25:31.674 "firmware_revision": "25.01", 00:25:31.674 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.674 "oacs": { 00:25:31.674 "security": 0, 00:25:31.674 "format": 0, 00:25:31.674 "firmware": 0, 00:25:31.674 "ns_manage": 0 00:25:31.674 }, 00:25:31.674 "multi_ctrlr": true, 00:25:31.674 "ana_reporting": false 00:25:31.674 }, 00:25:31.674 "vs": { 00:25:31.674 "nvme_version": "1.3" 00:25:31.674 }, 00:25:31.674 "ns_data": { 00:25:31.674 "id": 1, 00:25:31.674 "can_share": true 00:25:31.674 } 00:25:31.674 } 00:25:31.674 ], 00:25:31.674 "mp_policy": "active_passive" 00:25:31.674 } 00:25:31.674 } 00:25:31.674 ] 00:25:31.674 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.674 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.674 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.674 11:19:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.674 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.674 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:31.674 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.j6t4YJYrJE 
00:25:31.674 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:31.674 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.j6t4YJYrJE 00:25:31.674 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.j6t4YJYrJE 00:25:31.674 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.674 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.935 [2024-11-19 11:19:40.046754] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:31.935 [2024-11-19 11:19:40.046886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.935 [2024-11-19 11:19:40.070834] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:31.935 nvme0n1 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.935 [ 00:25:31.935 { 00:25:31.935 "name": "nvme0n1", 00:25:31.935 "aliases": [ 00:25:31.935 "bfa9ba6c-d9ab-47a9-8aaf-3f59f38e453a" 00:25:31.935 ], 00:25:31.935 "product_name": "NVMe disk", 00:25:31.935 "block_size": 512, 00:25:31.935 "num_blocks": 2097152, 00:25:31.935 "uuid": "bfa9ba6c-d9ab-47a9-8aaf-3f59f38e453a", 00:25:31.935 "numa_id": 0, 00:25:31.935 "assigned_rate_limits": { 00:25:31.935 "rw_ios_per_sec": 0, 00:25:31.935 
"rw_mbytes_per_sec": 0, 00:25:31.935 "r_mbytes_per_sec": 0, 00:25:31.935 "w_mbytes_per_sec": 0 00:25:31.935 }, 00:25:31.935 "claimed": false, 00:25:31.935 "zoned": false, 00:25:31.935 "supported_io_types": { 00:25:31.935 "read": true, 00:25:31.935 "write": true, 00:25:31.935 "unmap": false, 00:25:31.935 "flush": true, 00:25:31.935 "reset": true, 00:25:31.935 "nvme_admin": true, 00:25:31.935 "nvme_io": true, 00:25:31.935 "nvme_io_md": false, 00:25:31.935 "write_zeroes": true, 00:25:31.935 "zcopy": false, 00:25:31.935 "get_zone_info": false, 00:25:31.935 "zone_management": false, 00:25:31.935 "zone_append": false, 00:25:31.935 "compare": true, 00:25:31.935 "compare_and_write": true, 00:25:31.935 "abort": true, 00:25:31.935 "seek_hole": false, 00:25:31.935 "seek_data": false, 00:25:31.935 "copy": true, 00:25:31.935 "nvme_iov_md": false 00:25:31.935 }, 00:25:31.935 "memory_domains": [ 00:25:31.935 { 00:25:31.935 "dma_device_id": "system", 00:25:31.935 "dma_device_type": 1 00:25:31.935 } 00:25:31.935 ], 00:25:31.935 "driver_specific": { 00:25:31.935 "nvme": [ 00:25:31.935 { 00:25:31.935 "trid": { 00:25:31.935 "trtype": "TCP", 00:25:31.935 "adrfam": "IPv4", 00:25:31.935 "traddr": "10.0.0.2", 00:25:31.935 "trsvcid": "4421", 00:25:31.935 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:31.935 }, 00:25:31.935 "ctrlr_data": { 00:25:31.935 "cntlid": 3, 00:25:31.935 "vendor_id": "0x8086", 00:25:31.935 "model_number": "SPDK bdev Controller", 00:25:31.935 "serial_number": "00000000000000000000", 00:25:31.935 "firmware_revision": "25.01", 00:25:31.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.935 "oacs": { 00:25:31.935 "security": 0, 00:25:31.935 "format": 0, 00:25:31.935 "firmware": 0, 00:25:31.935 "ns_manage": 0 00:25:31.935 }, 00:25:31.935 "multi_ctrlr": true, 00:25:31.935 "ana_reporting": false 00:25:31.935 }, 00:25:31.935 "vs": { 00:25:31.935 "nvme_version": "1.3" 00:25:31.935 }, 00:25:31.935 "ns_data": { 00:25:31.935 "id": 1, 00:25:31.935 "can_share": true 00:25:31.935 } 
00:25:31.935 } 00:25:31.935 ], 00:25:31.935 "mp_policy": "active_passive" 00:25:31.935 } 00:25:31.935 } 00:25:31.935 ] 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.j6t4YJYrJE 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:31.935 rmmod nvme_tcp 00:25:31.935 rmmod nvme_fabrics 00:25:31.935 rmmod nvme_keyring 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:31.935 11:19:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 38677 ']' 00:25:31.935 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 38677 00:25:31.936 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 38677 ']' 00:25:31.936 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 38677 00:25:31.936 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:31.936 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.936 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38677 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38677' 00:25:32.196 killing process with pid 38677 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 38677 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 38677 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:32.196 11:19:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.196 11:19:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.742 11:19:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:34.742 00:25:34.743 real 0m12.729s 00:25:34.743 user 0m4.393s 00:25:34.743 sys 0m6.840s 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.743 ************************************ 00:25:34.743 END TEST nvmf_async_init 00:25:34.743 ************************************ 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.743 ************************************ 00:25:34.743 START TEST dma 00:25:34.743 ************************************ 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:34.743 * 
Looking for test storage... 00:25:34.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:34.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.743 --rc genhtml_branch_coverage=1 00:25:34.743 --rc genhtml_function_coverage=1 00:25:34.743 --rc genhtml_legend=1 00:25:34.743 --rc geninfo_all_blocks=1 00:25:34.743 --rc geninfo_unexecuted_blocks=1 00:25:34.743 00:25:34.743 ' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:34.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.743 --rc genhtml_branch_coverage=1 00:25:34.743 --rc genhtml_function_coverage=1 
00:25:34.743 --rc genhtml_legend=1 00:25:34.743 --rc geninfo_all_blocks=1 00:25:34.743 --rc geninfo_unexecuted_blocks=1 00:25:34.743 00:25:34.743 ' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:34.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.743 --rc genhtml_branch_coverage=1 00:25:34.743 --rc genhtml_function_coverage=1 00:25:34.743 --rc genhtml_legend=1 00:25:34.743 --rc geninfo_all_blocks=1 00:25:34.743 --rc geninfo_unexecuted_blocks=1 00:25:34.743 00:25:34.743 ' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:34.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.743 --rc genhtml_branch_coverage=1 00:25:34.743 --rc genhtml_function_coverage=1 00:25:34.743 --rc genhtml_legend=1 00:25:34.743 --rc geninfo_all_blocks=1 00:25:34.743 --rc geninfo_unexecuted_blocks=1 00:25:34.743 00:25:34.743 ' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.743 11:19:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:34.744 
11:19:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:34.744 00:25:34.744 real 0m0.219s 00:25:34.744 user 0m0.145s 00:25:34.744 sys 0m0.088s 00:25:34.744 11:19:42 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:34.744 ************************************ 00:25:34.744 END TEST dma 00:25:34.744 ************************************ 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.744 ************************************ 00:25:34.744 START TEST nvmf_identify 00:25:34.744 ************************************ 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:34.744 * Looking for test storage... 
00:25:34.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:34.744 11:19:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:34.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.744 --rc genhtml_branch_coverage=1 00:25:34.744 --rc genhtml_function_coverage=1 00:25:34.744 --rc genhtml_legend=1 00:25:34.744 --rc geninfo_all_blocks=1 00:25:34.744 --rc geninfo_unexecuted_blocks=1 00:25:34.744 00:25:34.744 ' 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:25:34.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.744 --rc genhtml_branch_coverage=1 00:25:34.744 --rc genhtml_function_coverage=1 00:25:34.744 --rc genhtml_legend=1 00:25:34.744 --rc geninfo_all_blocks=1 00:25:34.744 --rc geninfo_unexecuted_blocks=1 00:25:34.744 00:25:34.744 ' 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:34.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.744 --rc genhtml_branch_coverage=1 00:25:34.744 --rc genhtml_function_coverage=1 00:25:34.744 --rc genhtml_legend=1 00:25:34.744 --rc geninfo_all_blocks=1 00:25:34.744 --rc geninfo_unexecuted_blocks=1 00:25:34.744 00:25:34.744 ' 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:34.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.744 --rc genhtml_branch_coverage=1 00:25:34.744 --rc genhtml_function_coverage=1 00:25:34.744 --rc genhtml_legend=1 00:25:34.744 --rc geninfo_all_blocks=1 00:25:34.744 --rc geninfo_unexecuted_blocks=1 00:25:34.744 00:25:34.744 ' 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.744 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.745 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.745 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:34.745 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.745 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.745 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.745 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.006 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.007 11:19:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:43.183 11:19:51 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:43.183 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:43.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.184 
11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:43.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:43.184 Found net devices under 0000:31:00.0: cvl_0_0 00:25:43.184 11:19:51 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:43.184 Found net devices under 0000:31:00.1: cvl_0_1 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:43.184 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:43.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:25:43.445 00:25:43.445 --- 10.0.0.2 ping statistics --- 00:25:43.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.445 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:43.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:25:43.445 00:25:43.445 --- 10.0.0.1 ping statistics --- 00:25:43.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.445 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=43949 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 43949 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 43949 ']' 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.445 11:19:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:43.445 [2024-11-19 11:19:51.697692] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:25:43.445 [2024-11-19 11:19:51.697756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.445 [2024-11-19 11:19:51.795569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.705 [2024-11-19 11:19:51.837902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.705 [2024-11-19 11:19:51.837941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.705 [2024-11-19 11:19:51.837949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.705 [2024-11-19 11:19:51.837956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.705 [2024-11-19 11:19:51.837962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:43.705 [2024-11-19 11:19:51.839727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.705 [2024-11-19 11:19:51.839821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.705 [2024-11-19 11:19:51.839957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.705 [2024-11-19 11:19:51.839958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:44.276 [2024-11-19 11:19:52.514678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:44.276 Malloc0 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.276 11:19:52 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.276 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:44.538 [2024-11-19 11:19:52.628211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:44.538 11:19:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:44.538 [ 00:25:44.538 { 00:25:44.538 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:44.538 "subtype": "Discovery", 00:25:44.538 "listen_addresses": [ 00:25:44.538 { 00:25:44.538 "trtype": "TCP", 00:25:44.538 "adrfam": "IPv4", 00:25:44.538 "traddr": "10.0.0.2", 00:25:44.538 "trsvcid": "4420" 00:25:44.538 } 00:25:44.538 ], 00:25:44.538 "allow_any_host": true, 00:25:44.538 "hosts": [] 00:25:44.538 }, 00:25:44.538 { 00:25:44.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.538 "subtype": "NVMe", 00:25:44.538 "listen_addresses": [ 00:25:44.538 { 00:25:44.538 "trtype": "TCP", 00:25:44.538 "adrfam": "IPv4", 00:25:44.538 "traddr": "10.0.0.2", 00:25:44.538 "trsvcid": "4420" 00:25:44.538 } 00:25:44.538 ], 00:25:44.538 "allow_any_host": true, 00:25:44.538 "hosts": [], 00:25:44.538 "serial_number": "SPDK00000000000001", 00:25:44.538 "model_number": "SPDK bdev Controller", 00:25:44.538 "max_namespaces": 32, 00:25:44.538 "min_cntlid": 1, 00:25:44.538 "max_cntlid": 65519, 00:25:44.538 "namespaces": [ 00:25:44.538 { 00:25:44.538 "nsid": 1, 00:25:44.538 "bdev_name": "Malloc0", 00:25:44.538 "name": "Malloc0", 00:25:44.538 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:44.538 "eui64": "ABCDEF0123456789", 00:25:44.538 "uuid": "c88d42e4-4744-407d-a9e8-763af88d61bd" 00:25:44.538 } 00:25:44.538 ] 00:25:44.538 } 00:25:44.538 ] 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.538 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:44.538 [2024-11-19 11:19:52.691492] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:25:44.538 [2024-11-19 11:19:52.691532] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44097 ] 00:25:44.538 [2024-11-19 11:19:52.743987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:44.538 [2024-11-19 11:19:52.744046] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:44.538 [2024-11-19 11:19:52.744051] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:44.538 [2024-11-19 11:19:52.744062] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:44.538 [2024-11-19 11:19:52.744072] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:44.538 [2024-11-19 11:19:52.748169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:44.538 [2024-11-19 11:19:52.748201] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x111f550 0 00:25:44.538 [2024-11-19 11:19:52.755875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:44.538 [2024-11-19 11:19:52.755888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:44.538 [2024-11-19 11:19:52.755893] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:44.538 [2024-11-19 11:19:52.755896] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:44.538 [2024-11-19 11:19:52.755929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.538 [2024-11-19 11:19:52.755934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.538 [2024-11-19 11:19:52.755942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.538 [2024-11-19 11:19:52.755955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:44.538 [2024-11-19 11:19:52.755972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.538 [2024-11-19 11:19:52.763874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.538 [2024-11-19 11:19:52.763883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.538 [2024-11-19 11:19:52.763887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.538 [2024-11-19 11:19:52.763891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.538 [2024-11-19 11:19:52.763903] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:44.538 [2024-11-19 11:19:52.763910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:44.538 [2024-11-19 11:19:52.763916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:44.538 [2024-11-19 11:19:52.763929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.538 [2024-11-19 11:19:52.763933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.538 [2024-11-19 11:19:52.763937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 
00:25:44.538 [2024-11-19 11:19:52.763945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.538 [2024-11-19 11:19:52.763958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.538 [2024-11-19 11:19:52.764180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.539 [2024-11-19 11:19:52.764187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.539 [2024-11-19 11:19:52.764191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.539 [2024-11-19 11:19:52.764200] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:44.539 [2024-11-19 11:19:52.764207] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:44.539 [2024-11-19 11:19:52.764214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.539 [2024-11-19 11:19:52.764228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.539 [2024-11-19 11:19:52.764239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.539 [2024-11-19 11:19:52.764418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.539 [2024-11-19 11:19:52.764425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:44.539 [2024-11-19 11:19:52.764428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.539 [2024-11-19 11:19:52.764437] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:44.539 [2024-11-19 11:19:52.764445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:44.539 [2024-11-19 11:19:52.764452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.539 [2024-11-19 11:19:52.764469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.539 [2024-11-19 11:19:52.764479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.539 [2024-11-19 11:19:52.764665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.539 [2024-11-19 11:19:52.764671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.539 [2024-11-19 11:19:52.764674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.539 [2024-11-19 11:19:52.764684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:44.539 [2024-11-19 11:19:52.764693] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.539 [2024-11-19 11:19:52.764707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.539 [2024-11-19 11:19:52.764717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.539 [2024-11-19 11:19:52.764903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.539 [2024-11-19 11:19:52.764910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.539 [2024-11-19 11:19:52.764913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.764917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.539 [2024-11-19 11:19:52.764922] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:44.539 [2024-11-19 11:19:52.764927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:44.539 [2024-11-19 11:19:52.764934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:44.539 [2024-11-19 11:19:52.765043] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:44.539 [2024-11-19 11:19:52.765047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:25:44.539 [2024-11-19 11:19:52.765055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.539 [2024-11-19 11:19:52.765069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.539 [2024-11-19 11:19:52.765080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.539 [2024-11-19 11:19:52.765280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.539 [2024-11-19 11:19:52.765286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.539 [2024-11-19 11:19:52.765290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.539 [2024-11-19 11:19:52.765298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:44.539 [2024-11-19 11:19:52.765312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.539 [2024-11-19 11:19:52.765327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.539 [2024-11-19 11:19:52.765337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.539 [2024-11-19 
11:19:52.765508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.539 [2024-11-19 11:19:52.765514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.539 [2024-11-19 11:19:52.765518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.539 [2024-11-19 11:19:52.765526] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:44.539 [2024-11-19 11:19:52.765532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:44.539 [2024-11-19 11:19:52.765539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:44.539 [2024-11-19 11:19:52.765552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:44.539 [2024-11-19 11:19:52.765561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.539 [2024-11-19 11:19:52.765572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.539 [2024-11-19 11:19:52.765582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.539 [2024-11-19 11:19:52.765785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.539 [2024-11-19 11:19:52.765792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:25:44.539 [2024-11-19 11:19:52.765796] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765800] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111f550): datao=0, datal=4096, cccid=0 00:25:44.539 [2024-11-19 11:19:52.765805] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1181100) on tqpair(0x111f550): expected_datao=0, payload_size=4096 00:25:44.539 [2024-11-19 11:19:52.765810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765817] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765821] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.539 [2024-11-19 11:19:52.765956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.539 [2024-11-19 11:19:52.765959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.765963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.539 [2024-11-19 11:19:52.765970] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:44.539 [2024-11-19 11:19:52.765975] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:44.539 [2024-11-19 11:19:52.765980] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:44.539 [2024-11-19 11:19:52.765987] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:44.539 [2024-11-19 11:19:52.765995] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:44.539 [2024-11-19 11:19:52.766000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:44.539 [2024-11-19 11:19:52.766010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:44.539 [2024-11-19 11:19:52.766017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.766021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.766024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.539 [2024-11-19 11:19:52.766031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.539 [2024-11-19 11:19:52.766042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.539 [2024-11-19 11:19:52.766218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.539 [2024-11-19 11:19:52.766224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.539 [2024-11-19 11:19:52.766228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.766232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.539 [2024-11-19 11:19:52.766239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.539 [2024-11-19 11:19:52.766243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.766252] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.540 [2024-11-19 11:19:52.766259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.766272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.540 [2024-11-19 11:19:52.766278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.766291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.540 [2024-11-19 11:19:52.766297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.766310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.540 [2024-11-19 11:19:52.766315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:44.540 [2024-11-19 11:19:52.766323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:44.540 [2024-11-19 11:19:52.766329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.766340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.540 [2024-11-19 11:19:52.766353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181100, cid 0, qid 0 00:25:44.540 [2024-11-19 11:19:52.766359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181280, cid 1, qid 0 00:25:44.540 [2024-11-19 11:19:52.766363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181400, cid 2, qid 0 00:25:44.540 [2024-11-19 11:19:52.766368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181580, cid 3, qid 0 00:25:44.540 [2024-11-19 11:19:52.766373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181700, cid 4, qid 0 00:25:44.540 [2024-11-19 11:19:52.766607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.540 [2024-11-19 11:19:52.766613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.540 [2024-11-19 11:19:52.766617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181700) on tqpair=0x111f550 00:25:44.540 [2024-11-19 11:19:52.766628] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:44.540 [2024-11-19 11:19:52.766633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:25:44.540 [2024-11-19 11:19:52.766643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.766653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.540 [2024-11-19 11:19:52.766663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181700, cid 4, qid 0 00:25:44.540 [2024-11-19 11:19:52.766890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.540 [2024-11-19 11:19:52.766897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.540 [2024-11-19 11:19:52.766900] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766904] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111f550): datao=0, datal=4096, cccid=4 00:25:44.540 [2024-11-19 11:19:52.766909] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1181700) on tqpair(0x111f550): expected_datao=0, payload_size=4096 00:25:44.540 [2024-11-19 11:19:52.766913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766924] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.766928] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.540 [2024-11-19 11:19:52.808051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.540 [2024-11-19 11:19:52.808055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1181700) on tqpair=0x111f550 00:25:44.540 [2024-11-19 11:19:52.808073] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:44.540 [2024-11-19 11:19:52.808096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.808108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.540 [2024-11-19 11:19:52.808116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.808131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.540 [2024-11-19 11:19:52.808147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181700, cid 4, qid 0 00:25:44.540 [2024-11-19 11:19:52.808152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181880, cid 5, qid 0 00:25:44.540 [2024-11-19 11:19:52.808376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.540 [2024-11-19 11:19:52.808382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.540 [2024-11-19 11:19:52.808386] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808390] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111f550): datao=0, datal=1024, cccid=4 00:25:44.540 [2024-11-19 11:19:52.808394] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1181700) on tqpair(0x111f550): expected_datao=0, payload_size=1024 00:25:44.540 [2024-11-19 11:19:52.808398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808405] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808409] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.540 [2024-11-19 11:19:52.808420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.540 [2024-11-19 11:19:52.808424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.808428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181880) on tqpair=0x111f550 00:25:44.540 [2024-11-19 11:19:52.850057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.540 [2024-11-19 11:19:52.850067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.540 [2024-11-19 11:19:52.850071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.850075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181700) on tqpair=0x111f550 00:25:44.540 [2024-11-19 11:19:52.850087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.850091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.850097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.540 [2024-11-19 11:19:52.850112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181700, cid 4, qid 0 00:25:44.540 [2024-11-19 11:19:52.850390] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.540 [2024-11-19 11:19:52.850396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.540 [2024-11-19 11:19:52.850400] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.850404] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111f550): datao=0, datal=3072, cccid=4 00:25:44.540 [2024-11-19 11:19:52.850408] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1181700) on tqpair(0x111f550): expected_datao=0, payload_size=3072 00:25:44.540 [2024-11-19 11:19:52.850413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.850419] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.850423] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.850547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.540 [2024-11-19 11:19:52.850553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.540 [2024-11-19 11:19:52.850557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.850561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181700) on tqpair=0x111f550 00:25:44.540 [2024-11-19 11:19:52.850569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.540 [2024-11-19 11:19:52.850573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x111f550) 00:25:44.540 [2024-11-19 11:19:52.850582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.540 [2024-11-19 11:19:52.850595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181700, cid 4, qid 0 00:25:44.540 [2024-11-19 
11:19:52.850842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:44.540 [2024-11-19 11:19:52.850848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:44.540 [2024-11-19 11:19:52.850851] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:44.540 [2024-11-19 11:19:52.850855] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x111f550): datao=0, datal=8, cccid=4
00:25:44.540 [2024-11-19 11:19:52.850859] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1181700) on tqpair(0x111f550): expected_datao=0, payload_size=8
00:25:44.540 [2024-11-19 11:19:52.850868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:44.540 [2024-11-19 11:19:52.850875] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:44.540 [2024-11-19 11:19:52.850879] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:44.803 [2024-11-19 11:19:52.891046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:44.803 [2024-11-19 11:19:52.891056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:44.803 [2024-11-19 11:19:52.891059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:44.803 [2024-11-19 11:19:52.891064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181700) on tqpair=0x111f550
00:25:44.803 =====================================================
00:25:44.803 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:44.803 =====================================================
00:25:44.803 Controller Capabilities/Features
00:25:44.803 ================================
00:25:44.803 Vendor ID: 0000
00:25:44.803 Subsystem Vendor ID: 0000
00:25:44.803 Serial Number: ....................
00:25:44.803 Model Number: ........................................
00:25:44.803 Firmware Version: 25.01
00:25:44.803 Recommended Arb Burst: 0
00:25:44.803 IEEE OUI Identifier: 00 00 00
00:25:44.803 Multi-path I/O
00:25:44.803 May have multiple subsystem ports: No
00:25:44.803 May have multiple controllers: No
00:25:44.803 Associated with SR-IOV VF: No
00:25:44.803 Max Data Transfer Size: 131072
00:25:44.803 Max Number of Namespaces: 0
00:25:44.803 Max Number of I/O Queues: 1024
00:25:44.803 NVMe Specification Version (VS): 1.3
00:25:44.803 NVMe Specification Version (Identify): 1.3
00:25:44.803 Maximum Queue Entries: 128
00:25:44.803 Contiguous Queues Required: Yes
00:25:44.803 Arbitration Mechanisms Supported
00:25:44.803 Weighted Round Robin: Not Supported
00:25:44.803 Vendor Specific: Not Supported
00:25:44.803 Reset Timeout: 15000 ms
00:25:44.803 Doorbell Stride: 4 bytes
00:25:44.803 NVM Subsystem Reset: Not Supported
00:25:44.803 Command Sets Supported
00:25:44.803 NVM Command Set: Supported
00:25:44.803 Boot Partition: Not Supported
00:25:44.803 Memory Page Size Minimum: 4096 bytes
00:25:44.804 Memory Page Size Maximum: 4096 bytes
00:25:44.804 Persistent Memory Region: Not Supported
00:25:44.804 Optional Asynchronous Events Supported
00:25:44.804 Namespace Attribute Notices: Not Supported
00:25:44.804 Firmware Activation Notices: Not Supported
00:25:44.804 ANA Change Notices: Not Supported
00:25:44.804 PLE Aggregate Log Change Notices: Not Supported
00:25:44.804 LBA Status Info Alert Notices: Not Supported
00:25:44.804 EGE Aggregate Log Change Notices: Not Supported
00:25:44.804 Normal NVM Subsystem Shutdown event: Not Supported
00:25:44.804 Zone Descriptor Change Notices: Not Supported
00:25:44.804 Discovery Log Change Notices: Supported
00:25:44.804 Controller Attributes
00:25:44.804 128-bit Host Identifier: Not Supported
00:25:44.804 Non-Operational Permissive Mode: Not Supported
00:25:44.804 NVM Sets: Not Supported
00:25:44.804 Read Recovery Levels: Not Supported
00:25:44.804 Endurance Groups: Not Supported
00:25:44.804 Predictable Latency Mode: Not Supported
00:25:44.804 Traffic Based Keep ALive: Not Supported
00:25:44.804 Namespace Granularity: Not Supported
00:25:44.804 SQ Associations: Not Supported
00:25:44.804 UUID List: Not Supported
00:25:44.804 Multi-Domain Subsystem: Not Supported
00:25:44.804 Fixed Capacity Management: Not Supported
00:25:44.804 Variable Capacity Management: Not Supported
00:25:44.804 Delete Endurance Group: Not Supported
00:25:44.804 Delete NVM Set: Not Supported
00:25:44.804 Extended LBA Formats Supported: Not Supported
00:25:44.804 Flexible Data Placement Supported: Not Supported
00:25:44.804
00:25:44.804 Controller Memory Buffer Support
00:25:44.804 ================================
00:25:44.804 Supported: No
00:25:44.804
00:25:44.804 Persistent Memory Region Support
00:25:44.804 ================================
00:25:44.804 Supported: No
00:25:44.804
00:25:44.804 Admin Command Set Attributes
00:25:44.804 ============================
00:25:44.804 Security Send/Receive: Not Supported
00:25:44.804 Format NVM: Not Supported
00:25:44.804 Firmware Activate/Download: Not Supported
00:25:44.804 Namespace Management: Not Supported
00:25:44.804 Device Self-Test: Not Supported
00:25:44.804 Directives: Not Supported
00:25:44.804 NVMe-MI: Not Supported
00:25:44.804 Virtualization Management: Not Supported
00:25:44.804 Doorbell Buffer Config: Not Supported
00:25:44.804 Get LBA Status Capability: Not Supported
00:25:44.804 Command & Feature Lockdown Capability: Not Supported
00:25:44.804 Abort Command Limit: 1
00:25:44.804 Async Event Request Limit: 4
00:25:44.804 Number of Firmware Slots: N/A
00:25:44.804 Firmware Slot 1 Read-Only: N/A
00:25:44.804 Firmware Activation Without Reset: N/A
00:25:44.804 Multiple Update Detection Support: N/A
00:25:44.804 Firmware Update Granularity: No Information Provided
00:25:44.804 Per-Namespace SMART Log: No
00:25:44.804 Asymmetric Namespace Access Log Page: Not Supported
00:25:44.804 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:44.804 Command Effects Log Page: Not Supported
00:25:44.804 Get Log Page Extended Data: Supported
00:25:44.804 Telemetry Log Pages: Not Supported
00:25:44.804 Persistent Event Log Pages: Not Supported
00:25:44.804 Supported Log Pages Log Page: May Support
00:25:44.804 Commands Supported & Effects Log Page: Not Supported
00:25:44.804 Feature Identifiers & Effects Log Page:May Support
00:25:44.804 NVMe-MI Commands & Effects Log Page: May Support
00:25:44.804 Data Area 4 for Telemetry Log: Not Supported
00:25:44.804 Error Log Page Entries Supported: 128
00:25:44.804 Keep Alive: Not Supported
00:25:44.804
00:25:44.804 NVM Command Set Attributes
00:25:44.804 ==========================
00:25:44.804 Submission Queue Entry Size
00:25:44.804 Max: 1
00:25:44.804 Min: 1
00:25:44.804 Completion Queue Entry Size
00:25:44.804 Max: 1
00:25:44.804 Min: 1
00:25:44.804 Number of Namespaces: 0
00:25:44.804 Compare Command: Not Supported
00:25:44.804 Write Uncorrectable Command: Not Supported
00:25:44.804 Dataset Management Command: Not Supported
00:25:44.804 Write Zeroes Command: Not Supported
00:25:44.804 Set Features Save Field: Not Supported
00:25:44.804 Reservations: Not Supported
00:25:44.804 Timestamp: Not Supported
00:25:44.804 Copy: Not Supported
00:25:44.804 Volatile Write Cache: Not Present
00:25:44.804 Atomic Write Unit (Normal): 1
00:25:44.804 Atomic Write Unit (PFail): 1
00:25:44.804 Atomic Compare & Write Unit: 1
00:25:44.804 Fused Compare & Write: Supported
00:25:44.804 Scatter-Gather List
00:25:44.804 SGL Command Set: Supported
00:25:44.804 SGL Keyed: Supported
00:25:44.804 SGL Bit Bucket Descriptor: Not Supported
00:25:44.804 SGL Metadata Pointer: Not Supported
00:25:44.804 Oversized SGL: Not Supported
00:25:44.804 SGL Metadata Address: Not Supported
00:25:44.804 SGL Offset: Supported
00:25:44.804 Transport SGL Data Block: Not Supported
00:25:44.804 Replay Protected Memory Block: Not Supported
00:25:44.804
00:25:44.804 Firmware Slot Information
00:25:44.804 =========================
00:25:44.804 Active slot: 0
00:25:44.804
00:25:44.804
00:25:44.804 Error Log
00:25:44.804 =========
00:25:44.804
00:25:44.804 Active Namespaces
00:25:44.804 =================
00:25:44.804 Discovery Log Page
00:25:44.804 ==================
00:25:44.804 Generation Counter: 2
00:25:44.804 Number of Records: 2
00:25:44.804 Record Format: 0
00:25:44.804
00:25:44.804 Discovery Log Entry 0
00:25:44.804 ----------------------
00:25:44.804 Transport Type: 3 (TCP)
00:25:44.804 Address Family: 1 (IPv4)
00:25:44.804 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:44.804 Entry Flags:
00:25:44.804 Duplicate Returned Information: 1
00:25:44.804 Explicit Persistent Connection Support for Discovery: 1
00:25:44.804 Transport Requirements:
00:25:44.804 Secure Channel: Not Required
00:25:44.804 Port ID: 0 (0x0000)
00:25:44.804 Controller ID: 65535 (0xffff)
00:25:44.804 Admin Max SQ Size: 128
00:25:44.804 Transport Service Identifier: 4420
00:25:44.804 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:44.804 Transport Address: 10.0.0.2
00:25:44.804 Discovery Log Entry 1
00:25:44.804 ----------------------
00:25:44.804 Transport Type: 3 (TCP)
00:25:44.804 Address Family: 1 (IPv4)
00:25:44.804 Subsystem Type: 2 (NVM Subsystem)
00:25:44.804 Entry Flags:
00:25:44.804 Duplicate Returned Information: 0
00:25:44.804 Explicit Persistent Connection Support for Discovery: 0
00:25:44.804 Transport Requirements:
00:25:44.804 Secure Channel: Not Required
00:25:44.804 Port ID: 0 (0x0000)
00:25:44.804 Controller ID: 65535 (0xffff)
00:25:44.804 Admin Max SQ Size: 128
00:25:44.804 Transport Service Identifier: 4420
00:25:44.804 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:44.804 Transport Address: 10.0.0.2 [2024-11-19 11:19:52.891153] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:25:44.804 [2024-11-19
11:19:52.891164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181100) on tqpair=0x111f550 00:25:44.804 [2024-11-19 11:19:52.891170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.804 [2024-11-19 11:19:52.891176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181280) on tqpair=0x111f550 00:25:44.804 [2024-11-19 11:19:52.891181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.804 [2024-11-19 11:19:52.891186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181400) on tqpair=0x111f550 00:25:44.804 [2024-11-19 11:19:52.891190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.804 [2024-11-19 11:19:52.891195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181580) on tqpair=0x111f550 00:25:44.804 [2024-11-19 11:19:52.891200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.804 [2024-11-19 11:19:52.891211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.804 [2024-11-19 11:19:52.891215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.804 [2024-11-19 11:19:52.891218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111f550) 00:25:44.804 [2024-11-19 11:19:52.891225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.804 [2024-11-19 11:19:52.891238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181580, cid 3, qid 0 00:25:44.804 [2024-11-19 11:19:52.891356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.804 [2024-11-19 
11:19:52.891362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.804 [2024-11-19 11:19:52.891366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.891370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181580) on tqpair=0x111f550 00:25:44.805 [2024-11-19 11:19:52.891376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.891380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.891386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111f550) 00:25:44.805 [2024-11-19 11:19:52.891393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.805 [2024-11-19 11:19:52.891406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181580, cid 3, qid 0 00:25:44.805 [2024-11-19 11:19:52.891606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.805 [2024-11-19 11:19:52.891612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.805 [2024-11-19 11:19:52.891616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.891619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181580) on tqpair=0x111f550 00:25:44.805 [2024-11-19 11:19:52.891624] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:44.805 [2024-11-19 11:19:52.891629] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:44.805 [2024-11-19 11:19:52.891638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.891641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 
[2024-11-19 11:19:52.891645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111f550) 00:25:44.805 [2024-11-19 11:19:52.891652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.805 [2024-11-19 11:19:52.891662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181580, cid 3, qid 0 00:25:44.805 [2024-11-19 11:19:52.891858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.805 [2024-11-19 11:19:52.891870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.805 [2024-11-19 11:19:52.891873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.891877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181580) on tqpair=0x111f550 00:25:44.805 [2024-11-19 11:19:52.891887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.891891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.891895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111f550) 00:25:44.805 [2024-11-19 11:19:52.891901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.805 [2024-11-19 11:19:52.891911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181580, cid 3, qid 0 00:25:44.805 [2024-11-19 11:19:52.892161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.805 [2024-11-19 11:19:52.892168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.805 [2024-11-19 11:19:52.892171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181580) on 
tqpair=0x111f550 00:25:44.805 [2024-11-19 11:19:52.892184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111f550) 00:25:44.805 [2024-11-19 11:19:52.892198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.805 [2024-11-19 11:19:52.892208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181580, cid 3, qid 0 00:25:44.805 [2024-11-19 11:19:52.892412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.805 [2024-11-19 11:19:52.892419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.805 [2024-11-19 11:19:52.892422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181580) on tqpair=0x111f550 00:25:44.805 [2024-11-19 11:19:52.892437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111f550) 00:25:44.805 [2024-11-19 11:19:52.892452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.805 [2024-11-19 11:19:52.892461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181580, cid 3, qid 0 00:25:44.805 [2024-11-19 11:19:52.892653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.805 [2024-11-19 11:19:52.892660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:25:44.805 [2024-11-19 11:19:52.892663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181580) on tqpair=0x111f550 00:25:44.805 [2024-11-19 11:19:52.892676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.892684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x111f550) 00:25:44.805 [2024-11-19 11:19:52.892690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.805 [2024-11-19 11:19:52.892700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181580, cid 3, qid 0 00:25:44.805 [2024-11-19 11:19:52.899872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.805 [2024-11-19 11:19:52.899881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.805 [2024-11-19 11:19:52.899885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:52.899889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181580) on tqpair=0x111f550 00:25:44.805 [2024-11-19 11:19:52.899897] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 8 milliseconds 00:25:44.805 00:25:44.805 11:19:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:44.805 [2024-11-19 11:19:52.945130] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:25:44.805 [2024-11-19 11:19:52.945199] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44104 ] 00:25:44.805 [2024-11-19 11:19:52.997764] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:44.805 [2024-11-19 11:19:52.997811] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:44.805 [2024-11-19 11:19:52.997816] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:44.805 [2024-11-19 11:19:52.997827] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:44.805 [2024-11-19 11:19:52.997836] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:44.805 [2024-11-19 11:19:53.002073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:44.805 [2024-11-19 11:19:53.002105] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x249e550 0 00:25:44.805 [2024-11-19 11:19:53.002303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:44.805 [2024-11-19 11:19:53.002311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:44.805 [2024-11-19 11:19:53.002319] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:44.805 [2024-11-19 11:19:53.002322] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:44.805 [2024-11-19 11:19:53.002347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:53.002353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:53.002357] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.805 [2024-11-19 11:19:53.002369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:44.805 [2024-11-19 11:19:53.002382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.805 [2024-11-19 11:19:53.009871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.805 [2024-11-19 11:19:53.009881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.805 [2024-11-19 11:19:53.009884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:53.009889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.805 [2024-11-19 11:19:53.009900] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:44.805 [2024-11-19 11:19:53.009907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:44.805 [2024-11-19 11:19:53.009912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:44.805 [2024-11-19 11:19:53.009925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:53.009929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:53.009932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.805 [2024-11-19 11:19:53.009940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.805 [2024-11-19 11:19:53.009953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.805 [2024-11-19 11:19:53.010137] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.805 [2024-11-19 11:19:53.010144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.805 [2024-11-19 11:19:53.010148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:53.010151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.805 [2024-11-19 11:19:53.010156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:44.805 [2024-11-19 11:19:53.010164] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:44.805 [2024-11-19 11:19:53.010171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:53.010174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.805 [2024-11-19 11:19:53.010178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.806 [2024-11-19 11:19:53.010185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.806 [2024-11-19 11:19:53.010195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.806 [2024-11-19 11:19:53.010404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.806 [2024-11-19 11:19:53.010411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.806 [2024-11-19 11:19:53.010414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.010418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.806 [2024-11-19 11:19:53.010423] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:25:44.806 [2024-11-19 11:19:53.010434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:44.806 [2024-11-19 11:19:53.010441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.010445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.010449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.806 [2024-11-19 11:19:53.010455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.806 [2024-11-19 11:19:53.010466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.806 [2024-11-19 11:19:53.010667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.806 [2024-11-19 11:19:53.010673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.806 [2024-11-19 11:19:53.010677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.010681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.806 [2024-11-19 11:19:53.010686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:44.806 [2024-11-19 11:19:53.010695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.010699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.010703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.806 [2024-11-19 11:19:53.010710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.806 [2024-11-19 11:19:53.010720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.806 [2024-11-19 11:19:53.010924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.806 [2024-11-19 11:19:53.010931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.806 [2024-11-19 11:19:53.010935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.010939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.806 [2024-11-19 11:19:53.010944] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:44.806 [2024-11-19 11:19:53.010949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:44.806 [2024-11-19 11:19:53.010956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:44.806 [2024-11-19 11:19:53.011064] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:44.806 [2024-11-19 11:19:53.011069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:44.806 [2024-11-19 11:19:53.011077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.806 [2024-11-19 11:19:53.011091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.806 [2024-11-19 11:19:53.011101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.806 [2024-11-19 11:19:53.011284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.806 [2024-11-19 11:19:53.011291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.806 [2024-11-19 11:19:53.011294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.806 [2024-11-19 11:19:53.011305] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:44.806 [2024-11-19 11:19:53.011314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.806 [2024-11-19 11:19:53.011329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.806 [2024-11-19 11:19:53.011339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.806 [2024-11-19 11:19:53.011531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.806 [2024-11-19 11:19:53.011538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.806 [2024-11-19 11:19:53.011541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.806 [2024-11-19 11:19:53.011549] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:44.806 [2024-11-19 11:19:53.011554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:44.806 [2024-11-19 11:19:53.011562] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:44.806 [2024-11-19 11:19:53.011571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:44.806 [2024-11-19 11:19:53.011580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.806 [2024-11-19 11:19:53.011590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.806 [2024-11-19 11:19:53.011601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.806 [2024-11-19 11:19:53.011784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.806 [2024-11-19 11:19:53.011791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.806 [2024-11-19 11:19:53.011795] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011799] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x249e550): datao=0, datal=4096, cccid=0 00:25:44.806 [2024-11-19 11:19:53.011804] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2500100) on tqpair(0x249e550): expected_datao=0, payload_size=4096 00:25:44.806 [2024-11-19 11:19:53.011808] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011824] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.011829] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.054879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.806 [2024-11-19 11:19:53.054889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.806 [2024-11-19 11:19:53.054893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.054897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.806 [2024-11-19 11:19:53.054905] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:44.806 [2024-11-19 11:19:53.054910] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:44.806 [2024-11-19 11:19:53.054914] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:44.806 [2024-11-19 11:19:53.054924] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:44.806 [2024-11-19 11:19:53.054928] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:44.806 [2024-11-19 11:19:53.054933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:44.806 [2024-11-19 11:19:53.054944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:44.806 [2024-11-19 11:19:53.054950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.054954] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.054958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.806 [2024-11-19 11:19:53.054965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.806 [2024-11-19 11:19:53.054978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500100, cid 0, qid 0 00:25:44.806 [2024-11-19 11:19:53.055137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.806 [2024-11-19 11:19:53.055143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.806 [2024-11-19 11:19:53.055147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.055151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:44.806 [2024-11-19 11:19:53.055157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.055161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.806 [2024-11-19 11:19:53.055165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.055171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.807 [2024-11-19 11:19:53.055177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.055191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:44.807 [2024-11-19 11:19:53.055197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.055210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.807 [2024-11-19 11:19:53.055216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.055229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.807 [2024-11-19 11:19:53.055233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.055241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.055248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.055260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.807 [2024-11-19 11:19:53.055272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x2500100, cid 0, qid 0 00:25:44.807 [2024-11-19 11:19:53.055277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500280, cid 1, qid 0 00:25:44.807 [2024-11-19 11:19:53.055282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500400, cid 2, qid 0 00:25:44.807 [2024-11-19 11:19:53.055287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:44.807 [2024-11-19 11:19:53.055292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500700, cid 4, qid 0 00:25:44.807 [2024-11-19 11:19:53.055518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.807 [2024-11-19 11:19:53.055525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.807 [2024-11-19 11:19:53.055528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500700) on tqpair=0x249e550 00:25:44.807 [2024-11-19 11:19:53.055539] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:44.807 [2024-11-19 11:19:53.055544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.055552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.055558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.055565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.807 [2024-11-19 
11:19:53.055572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.055579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.807 [2024-11-19 11:19:53.055589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500700, cid 4, qid 0 00:25:44.807 [2024-11-19 11:19:53.055755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.807 [2024-11-19 11:19:53.055762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.807 [2024-11-19 11:19:53.055765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500700) on tqpair=0x249e550 00:25:44.807 [2024-11-19 11:19:53.055834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.055843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.055851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.055854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.055865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.807 [2024-11-19 11:19:53.055877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500700, cid 4, qid 0 00:25:44.807 [2024-11-19 11:19:53.056068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.807 [2024-11-19 11:19:53.056074] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.807 [2024-11-19 11:19:53.056078] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.056084] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x249e550): datao=0, datal=4096, cccid=4 00:25:44.807 [2024-11-19 11:19:53.056089] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2500700) on tqpair(0x249e550): expected_datao=0, payload_size=4096 00:25:44.807 [2024-11-19 11:19:53.056093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.056108] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.056112] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.099869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.807 [2024-11-19 11:19:53.099881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.807 [2024-11-19 11:19:53.099885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.099889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500700) on tqpair=0x249e550 00:25:44.807 [2024-11-19 11:19:53.099899] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:44.807 [2024-11-19 11:19:53.099909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.099918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.099926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.099930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.099937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.807 [2024-11-19 11:19:53.099950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500700, cid 4, qid 0 00:25:44.807 [2024-11-19 11:19:53.100116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.807 [2024-11-19 11:19:53.100123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.807 [2024-11-19 11:19:53.100126] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.100130] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x249e550): datao=0, datal=4096, cccid=4 00:25:44.807 [2024-11-19 11:19:53.100135] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2500700) on tqpair(0x249e550): expected_datao=0, payload_size=4096 00:25:44.807 [2024-11-19 11:19:53.100139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.100154] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.100158] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.141933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.807 [2024-11-19 11:19:53.141943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.807 [2024-11-19 11:19:53.141946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.141950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500700) on tqpair=0x249e550 00:25:44.807 [2024-11-19 11:19:53.141963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:44.807 
[2024-11-19 11:19:53.141973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:44.807 [2024-11-19 11:19:53.141980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.141984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x249e550) 00:25:44.807 [2024-11-19 11:19:53.141991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.807 [2024-11-19 11:19:53.142003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500700, cid 4, qid 0 00:25:44.807 [2024-11-19 11:19:53.142071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.807 [2024-11-19 11:19:53.142078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.807 [2024-11-19 11:19:53.142082] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.807 [2024-11-19 11:19:53.142085] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x249e550): datao=0, datal=4096, cccid=4 00:25:44.807 [2024-11-19 11:19:53.142090] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2500700) on tqpair(0x249e550): expected_datao=0, payload_size=4096 00:25:44.808 [2024-11-19 11:19:53.142094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.808 [2024-11-19 11:19:53.142109] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.808 [2024-11-19 11:19:53.142113] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.070 [2024-11-19 11:19:53.182932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.070 [2024-11-19 11:19:53.182942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.070 [2024-11-19 11:19:53.182946] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.070 [2024-11-19 11:19:53.182950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500700) on tqpair=0x249e550 00:25:45.070 [2024-11-19 11:19:53.182957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:45.070 [2024-11-19 11:19:53.182965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:45.070 [2024-11-19 11:19:53.182975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:45.070 [2024-11-19 11:19:53.182981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:45.070 [2024-11-19 11:19:53.182986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:45.070 [2024-11-19 11:19:53.182991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:45.070 [2024-11-19 11:19:53.182996] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:45.070 [2024-11-19 11:19:53.183001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:45.070 [2024-11-19 11:19:53.183006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:45.070 [2024-11-19 11:19:53.183020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.070 [2024-11-19 11:19:53.183024] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x249e550) 00:25:45.070 [2024-11-19 11:19:53.183031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.070 [2024-11-19 11:19:53.183038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.070 [2024-11-19 11:19:53.183042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.070 [2024-11-19 11:19:53.183045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x249e550) 00:25:45.070 [2024-11-19 11:19:53.183052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.070 [2024-11-19 11:19:53.183066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500700, cid 4, qid 0 00:25:45.070 [2024-11-19 11:19:53.183071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500880, cid 5, qid 0 00:25:45.070 [2024-11-19 11:19:53.183257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.070 [2024-11-19 11:19:53.183263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.070 [2024-11-19 11:19:53.183269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.070 [2024-11-19 11:19:53.183273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500700) on tqpair=0x249e550 00:25:45.070 [2024-11-19 11:19:53.183280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.070 [2024-11-19 11:19:53.183286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.070 [2024-11-19 11:19:53.183289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.070 [2024-11-19 11:19:53.183293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500880) on tqpair=0x249e550 00:25:45.070 [2024-11-19 
11:19:53.183302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.070 [2024-11-19 11:19:53.183306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x249e550) 00:25:45.070 [2024-11-19 11:19:53.183312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.071 [2024-11-19 11:19:53.183323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500880, cid 5, qid 0 00:25:45.071 [2024-11-19 11:19:53.183492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.071 [2024-11-19 11:19:53.183498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.071 [2024-11-19 11:19:53.183502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.183506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500880) on tqpair=0x249e550 00:25:45.071 [2024-11-19 11:19:53.183515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.183519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x249e550) 00:25:45.071 [2024-11-19 11:19:53.183526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.071 [2024-11-19 11:19:53.183535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500880, cid 5, qid 0 00:25:45.071 [2024-11-19 11:19:53.183712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.071 [2024-11-19 11:19:53.183719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.071 [2024-11-19 11:19:53.183722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.183726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x2500880) on tqpair=0x249e550 00:25:45.071 [2024-11-19 11:19:53.183735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.183739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x249e550) 00:25:45.071 [2024-11-19 11:19:53.183745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.071 [2024-11-19 11:19:53.183755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500880, cid 5, qid 0 00:25:45.071 [2024-11-19 11:19:53.183993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.071 [2024-11-19 11:19:53.184000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.071 [2024-11-19 11:19:53.184004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500880) on tqpair=0x249e550 00:25:45.071 [2024-11-19 11:19:53.184021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x249e550) 00:25:45.071 [2024-11-19 11:19:53.184032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.071 [2024-11-19 11:19:53.184039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x249e550) 00:25:45.071 [2024-11-19 11:19:53.184054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:45.071 [2024-11-19 11:19:53.184061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x249e550) 00:25:45.071 [2024-11-19 11:19:53.184071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.071 [2024-11-19 11:19:53.184079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x249e550) 00:25:45.071 [2024-11-19 11:19:53.184088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.071 [2024-11-19 11:19:53.184100] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500880, cid 5, qid 0 00:25:45.071 [2024-11-19 11:19:53.184105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500700, cid 4, qid 0 00:25:45.071 [2024-11-19 11:19:53.184110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500a00, cid 6, qid 0 00:25:45.071 [2024-11-19 11:19:53.184115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500b80, cid 7, qid 0 00:25:45.071 [2024-11-19 11:19:53.184321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.071 [2024-11-19 11:19:53.184327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.071 [2024-11-19 11:19:53.184331] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184334] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x249e550): datao=0, datal=8192, cccid=5 00:25:45.071 [2024-11-19 11:19:53.184340] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2500880) on tqpair(0x249e550): expected_datao=0, payload_size=8192 00:25:45.071 [2024-11-19 11:19:53.184344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184418] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184423] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.071 [2024-11-19 11:19:53.184434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.071 [2024-11-19 11:19:53.184438] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184442] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x249e550): datao=0, datal=512, cccid=4 00:25:45.071 [2024-11-19 11:19:53.184446] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2500700) on tqpair(0x249e550): expected_datao=0, payload_size=512 00:25:45.071 [2024-11-19 11:19:53.184450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184457] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184460] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.071 [2024-11-19 11:19:53.184472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.071 [2024-11-19 11:19:53.184475] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184479] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x249e550): datao=0, datal=512, cccid=6 00:25:45.071 [2024-11-19 11:19:53.184483] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x2500a00) on tqpair(0x249e550): expected_datao=0, payload_size=512 00:25:45.071 [2024-11-19 11:19:53.184487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184494] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184497] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.071 [2024-11-19 11:19:53.184511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.071 [2024-11-19 11:19:53.184514] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184518] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x249e550): datao=0, datal=4096, cccid=7 00:25:45.071 [2024-11-19 11:19:53.184522] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2500b80) on tqpair(0x249e550): expected_datao=0, payload_size=4096 00:25:45.071 [2024-11-19 11:19:53.184526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184533] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184536] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.071 [2024-11-19 11:19:53.184557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.071 [2024-11-19 11:19:53.184561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.071 [2024-11-19 11:19:53.184565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500880) on tqpair=0x249e550 00:25:45.071 [2024-11-19 11:19:53.184577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.071 [2024-11-19 11:19:53.184582] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:45.071 [2024-11-19 11:19:53.184586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:45.071 [2024-11-19 11:19:53.184590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500700) on tqpair=0x249e550
00:25:45.071 [2024-11-19 11:19:53.184600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:45.071 [2024-11-19 11:19:53.184606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:45.071 [2024-11-19 11:19:53.184610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:45.071 [2024-11-19 11:19:53.184614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500a00) on tqpair=0x249e550
00:25:45.071 [2024-11-19 11:19:53.184621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:45.071 [2024-11-19 11:19:53.184626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:45.071 [2024-11-19 11:19:53.184630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:45.071 [2024-11-19 11:19:53.184634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500b80) on tqpair=0x249e550
00:25:45.071 =====================================================
00:25:45.071 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:45.071 =====================================================
00:25:45.071 Controller Capabilities/Features
00:25:45.071 ================================
00:25:45.071 Vendor ID: 8086
00:25:45.071 Subsystem Vendor ID: 8086
00:25:45.071 Serial Number: SPDK00000000000001
00:25:45.071 Model Number: SPDK bdev Controller
00:25:45.071 Firmware Version: 25.01
00:25:45.071 Recommended Arb Burst: 6
00:25:45.071 IEEE OUI Identifier: e4 d2 5c
00:25:45.071 Multi-path I/O
00:25:45.071 May have multiple subsystem ports: Yes
00:25:45.071 May have multiple controllers: Yes
00:25:45.071 Associated with SR-IOV VF: No
00:25:45.071 Max Data Transfer Size: 131072
00:25:45.071 Max Number of Namespaces: 32
00:25:45.071 Max Number of I/O Queues: 127
00:25:45.071 NVMe Specification Version (VS): 1.3
00:25:45.071 NVMe Specification Version (Identify): 1.3
00:25:45.071 Maximum Queue Entries: 128
00:25:45.071 Contiguous Queues Required: Yes
00:25:45.071 Arbitration Mechanisms Supported
00:25:45.071 Weighted Round Robin: Not Supported
00:25:45.071 Vendor Specific: Not Supported
00:25:45.071 Reset Timeout: 15000 ms
00:25:45.071 Doorbell Stride: 4 bytes
00:25:45.071 NVM Subsystem Reset: Not Supported
00:25:45.071 Command Sets Supported
00:25:45.071 NVM Command Set: Supported
00:25:45.071 Boot Partition: Not Supported
00:25:45.071 Memory Page Size Minimum: 4096 bytes
00:25:45.071 Memory Page Size Maximum: 4096 bytes
00:25:45.071 Persistent Memory Region: Not Supported
00:25:45.071 Optional Asynchronous Events Supported
00:25:45.072 Namespace Attribute Notices: Supported
00:25:45.072 Firmware Activation Notices: Not Supported
00:25:45.072 ANA Change Notices: Not Supported
00:25:45.072 PLE Aggregate Log Change Notices: Not Supported
00:25:45.072 LBA Status Info Alert Notices: Not Supported
00:25:45.072 EGE Aggregate Log Change Notices: Not Supported
00:25:45.072 Normal NVM Subsystem Shutdown event: Not Supported
00:25:45.072 Zone Descriptor Change Notices: Not Supported
00:25:45.072 Discovery Log Change Notices: Not Supported
00:25:45.072 Controller Attributes
00:25:45.072 128-bit Host Identifier: Supported
00:25:45.072 Non-Operational Permissive Mode: Not Supported
00:25:45.072 NVM Sets: Not Supported
00:25:45.072 Read Recovery Levels: Not Supported
00:25:45.072 Endurance Groups: Not Supported
00:25:45.072 Predictable Latency Mode: Not Supported
00:25:45.072 Traffic Based Keep ALive: Not Supported
00:25:45.072 Namespace Granularity: Not Supported
00:25:45.072 SQ Associations: Not Supported
00:25:45.072 UUID List: Not Supported
00:25:45.072 Multi-Domain Subsystem: Not Supported
00:25:45.072 Fixed Capacity Management: Not Supported
00:25:45.072 Variable Capacity Management: Not Supported
00:25:45.072 Delete Endurance Group: Not Supported
00:25:45.072 Delete NVM Set: Not Supported
00:25:45.072 Extended LBA Formats Supported: Not Supported
00:25:45.072 Flexible Data Placement Supported: Not Supported
00:25:45.072
00:25:45.072 Controller Memory Buffer Support
00:25:45.072 ================================
00:25:45.072 Supported: No
00:25:45.072
00:25:45.072 Persistent Memory Region Support
00:25:45.072 ================================
00:25:45.072 Supported: No
00:25:45.072
00:25:45.072 Admin Command Set Attributes
00:25:45.072 ============================
00:25:45.072 Security Send/Receive: Not Supported
00:25:45.072 Format NVM: Not Supported
00:25:45.072 Firmware Activate/Download: Not Supported
00:25:45.072 Namespace Management: Not Supported
00:25:45.072 Device Self-Test: Not Supported
00:25:45.072 Directives: Not Supported
00:25:45.072 NVMe-MI: Not Supported
00:25:45.072 Virtualization Management: Not Supported
00:25:45.072 Doorbell Buffer Config: Not Supported
00:25:45.072 Get LBA Status Capability: Not Supported
00:25:45.072 Command & Feature Lockdown Capability: Not Supported
00:25:45.072 Abort Command Limit: 4
00:25:45.072 Async Event Request Limit: 4
00:25:45.072 Number of Firmware Slots: N/A
00:25:45.072 Firmware Slot 1 Read-Only: N/A
00:25:45.072 Firmware Activation Without Reset: N/A
00:25:45.072 Multiple Update Detection Support: N/A
00:25:45.072 Firmware Update Granularity: No Information Provided
00:25:45.072 Per-Namespace SMART Log: No
00:25:45.072 Asymmetric Namespace Access Log Page: Not Supported
00:25:45.072 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:45.072 Command Effects Log Page: Supported
00:25:45.072 Get Log Page Extended Data: Supported
00:25:45.072 Telemetry Log Pages: Not Supported
00:25:45.072 Persistent Event Log Pages: Not Supported
00:25:45.072 Supported Log Pages Log Page: May Support
00:25:45.072 Commands Supported & Effects Log Page: Not Supported
00:25:45.072 Feature Identifiers & Effects Log Page:May Support
00:25:45.072 NVMe-MI Commands & Effects Log Page: May Support
00:25:45.072 Data Area 4 for Telemetry Log: Not Supported
00:25:45.072 Error Log Page Entries Supported: 128
00:25:45.072 Keep Alive: Supported
00:25:45.072 Keep Alive Granularity: 10000 ms
00:25:45.072
00:25:45.072 NVM Command Set Attributes
00:25:45.072 ==========================
00:25:45.072 Submission Queue Entry Size
00:25:45.072 Max: 64
00:25:45.072 Min: 64
00:25:45.072 Completion Queue Entry Size
00:25:45.072 Max: 16
00:25:45.072 Min: 16
00:25:45.072 Number of Namespaces: 32
00:25:45.072 Compare Command: Supported
00:25:45.072 Write Uncorrectable Command: Not Supported
00:25:45.072 Dataset Management Command: Supported
00:25:45.072 Write Zeroes Command: Supported
00:25:45.072 Set Features Save Field: Not Supported
00:25:45.072 Reservations: Supported
00:25:45.072 Timestamp: Not Supported
00:25:45.072 Copy: Supported
00:25:45.072 Volatile Write Cache: Present
00:25:45.072 Atomic Write Unit (Normal): 1
00:25:45.072 Atomic Write Unit (PFail): 1
00:25:45.072 Atomic Compare & Write Unit: 1
00:25:45.072 Fused Compare & Write: Supported
00:25:45.072 Scatter-Gather List
00:25:45.072 SGL Command Set: Supported
00:25:45.072 SGL Keyed: Supported
00:25:45.072 SGL Bit Bucket Descriptor: Not Supported
00:25:45.072 SGL Metadata Pointer: Not Supported
00:25:45.072 Oversized SGL: Not Supported
00:25:45.072 SGL Metadata Address: Not Supported
00:25:45.072 SGL Offset: Supported
00:25:45.072 Transport SGL Data Block: Not Supported
00:25:45.072 Replay Protected Memory Block: Not Supported
00:25:45.072
00:25:45.072 Firmware Slot Information
00:25:45.072 =========================
00:25:45.072 Active slot: 1
00:25:45.072 Slot 1 Firmware Revision: 25.01
00:25:45.072
00:25:45.072
00:25:45.072 Commands Supported and Effects
00:25:45.072 ==============================
00:25:45.072 Admin Commands
00:25:45.072 --------------
00:25:45.072 Get Log Page (02h): Supported
00:25:45.072 Identify (06h): Supported
00:25:45.072 Abort (08h): Supported
00:25:45.072 Set Features (09h): Supported
00:25:45.072 Get Features (0Ah): Supported
00:25:45.072 Asynchronous Event Request (0Ch): Supported
00:25:45.072 Keep Alive (18h): Supported
00:25:45.072 I/O Commands
00:25:45.072 ------------
00:25:45.072 Flush (00h): Supported LBA-Change
00:25:45.072 Write (01h): Supported LBA-Change
00:25:45.072 Read (02h): Supported
00:25:45.072 Compare (05h): Supported
00:25:45.072 Write Zeroes (08h): Supported LBA-Change
00:25:45.072 Dataset Management (09h): Supported LBA-Change
00:25:45.072 Copy (19h): Supported LBA-Change
00:25:45.072
00:25:45.072 Error Log
00:25:45.072 =========
00:25:45.072
00:25:45.072 Arbitration
00:25:45.072 ===========
00:25:45.072 Arbitration Burst: 1
00:25:45.072
00:25:45.072 Power Management
00:25:45.072 ================
00:25:45.072 Number of Power States: 1
00:25:45.072 Current Power State: Power State #0
00:25:45.072 Power State #0:
00:25:45.072 Max Power: 0.00 W
00:25:45.072 Non-Operational State: Operational
00:25:45.072 Entry Latency: Not Reported
00:25:45.072 Exit Latency: Not Reported
00:25:45.072 Relative Read Throughput: 0
00:25:45.072 Relative Read Latency: 0
00:25:45.072 Relative Write Throughput: 0
00:25:45.072 Relative Write Latency: 0
00:25:45.072 Idle Power: Not Reported
00:25:45.072 Active Power: Not Reported
00:25:45.072 Non-Operational Permissive Mode: Not Supported
00:25:45.072
00:25:45.072 Health Information
00:25:45.072 ==================
00:25:45.072 Critical Warnings:
00:25:45.072 Available Spare Space: OK
00:25:45.072 Temperature: OK
00:25:45.072 Device Reliability: OK
00:25:45.072 Read Only: No
00:25:45.072 Volatile Memory Backup: OK
00:25:45.072 Current Temperature: 0 Kelvin (-273 Celsius)
00:25:45.072 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:25:45.072 Available Spare: 0%
00:25:45.072 Available Spare Threshold: 0%
00:25:45.072 Life Percentage
Used:[2024-11-19 11:19:53.184729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.072 [2024-11-19 11:19:53.184734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x249e550) 00:25:45.072 [2024-11-19 11:19:53.184741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.072 [2024-11-19 11:19:53.184752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500b80, cid 7, qid 0 00:25:45.072 [2024-11-19 11:19:53.188873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.072 [2024-11-19 11:19:53.188881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.072 [2024-11-19 11:19:53.188885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.072 [2024-11-19 11:19:53.188889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500b80) on tqpair=0x249e550 00:25:45.072 [2024-11-19 11:19:53.188919] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:45.072 [2024-11-19 11:19:53.188928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500100) on tqpair=0x249e550 00:25:45.072 [2024-11-19 11:19:53.188934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.072 [2024-11-19 11:19:53.188940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500280) on tqpair=0x249e550 00:25:45.072 [2024-11-19 11:19:53.188944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.072 [2024-11-19 11:19:53.188953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500400) on tqpair=0x249e550 00:25:45.072 [2024-11-19 11:19:53.188958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.072 [2024-11-19 11:19:53.188963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.188967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.073 [2024-11-19 11:19:53.188975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.188979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.188983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.188990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.189002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.189154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.189160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.189164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.189174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.189189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.189202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.189380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.189386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.189390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.189398] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:45.073 [2024-11-19 11:19:53.189403] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:45.073 [2024-11-19 11:19:53.189412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.189427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.189437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.189599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.189606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.189609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189613] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.189622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.189638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.189649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.189872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.189879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.189882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.189896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.189903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.189910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.189920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.190136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 
11:19:53.190142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.190146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.190159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.190173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.190183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.190395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.190401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.190405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.190418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.190433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 
11:19:53.190442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.190672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.190679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.190683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.190696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.190712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.190722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.190940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.190947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.190951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.190964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.190971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.190978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.190988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.191210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.191216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.191220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.191233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.191247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.191257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.191419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.191425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.191429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.191443] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.191457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.073 [2024-11-19 11:19:53.191466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.073 [2024-11-19 11:19:53.191666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.073 [2024-11-19 11:19:53.191672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.073 [2024-11-19 11:19:53.191676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.073 [2024-11-19 11:19:53.191689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.073 [2024-11-19 11:19:53.191697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.073 [2024-11-19 11:19:53.191703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.074 [2024-11-19 11:19:53.191715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.074 [2024-11-19 11:19:53.191942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.074 [2024-11-19 11:19:53.191949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.074 [2024-11-19 11:19:53.191953] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.191956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.074 [2024-11-19 11:19:53.191966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.191970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.191973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.074 [2024-11-19 11:19:53.191980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.074 [2024-11-19 11:19:53.191990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.074 [2024-11-19 11:19:53.192221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.074 [2024-11-19 11:19:53.192228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.074 [2024-11-19 11:19:53.192231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.074 [2024-11-19 11:19:53.192244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.074 [2024-11-19 11:19:53.192259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.074 [2024-11-19 11:19:53.192268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.074 [2024-11-19 
11:19:53.192492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.074 [2024-11-19 11:19:53.192499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.074 [2024-11-19 11:19:53.192502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.074 [2024-11-19 11:19:53.192515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.074 [2024-11-19 11:19:53.192530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.074 [2024-11-19 11:19:53.192539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.074 [2024-11-19 11:19:53.192767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.074 [2024-11-19 11:19:53.192774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.074 [2024-11-19 11:19:53.192777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.074 [2024-11-19 11:19:53.192790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.192798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x249e550) 00:25:45.074 [2024-11-19 11:19:53.192805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.074 [2024-11-19 11:19:53.192814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2500580, cid 3, qid 0 00:25:45.074 [2024-11-19 11:19:53.196870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.074 [2024-11-19 11:19:53.196880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.074 [2024-11-19 11:19:53.196883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.074 [2024-11-19 11:19:53.196887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2500580) on tqpair=0x249e550 00:25:45.074 [2024-11-19 11:19:53.196895] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:25:45.074 0% 00:25:45.074 Data Units Read: 0 00:25:45.074 Data Units Written: 0 00:25:45.074 Host Read Commands: 0 00:25:45.074 Host Write Commands: 0 00:25:45.074 Controller Busy Time: 0 minutes 00:25:45.074 Power Cycles: 0 00:25:45.074 Power On Hours: 0 hours 00:25:45.074 Unsafe Shutdowns: 0 00:25:45.074 Unrecoverable Media Errors: 0 00:25:45.074 Lifetime Error Log Entries: 0 00:25:45.074 Warning Temperature Time: 0 minutes 00:25:45.074 Critical Temperature Time: 0 minutes 00:25:45.074 00:25:45.074 Number of Queues 00:25:45.074 ================ 00:25:45.074 Number of I/O Submission Queues: 127 00:25:45.074 Number of I/O Completion Queues: 127 00:25:45.074 00:25:45.074 Active Namespaces 00:25:45.074 ================= 00:25:45.074 Namespace ID:1 00:25:45.074 Error Recovery Timeout: Unlimited 00:25:45.074 Command Set Identifier: NVM (00h) 00:25:45.074 Deallocate: Supported 00:25:45.074 Deallocated/Unwritten Error: Not Supported 00:25:45.074 Deallocated Read Value: Unknown 00:25:45.074 Deallocate in Write Zeroes: Not Supported 00:25:45.074 Deallocated Guard Field: 0xFFFF 00:25:45.074 Flush: Supported 00:25:45.074 Reservation: Supported 
00:25:45.074 Namespace Sharing Capabilities: Multiple Controllers 00:25:45.074 Size (in LBAs): 131072 (0GiB) 00:25:45.074 Capacity (in LBAs): 131072 (0GiB) 00:25:45.074 Utilization (in LBAs): 131072 (0GiB) 00:25:45.074 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:45.074 EUI64: ABCDEF0123456789 00:25:45.074 UUID: c88d42e4-4744-407d-a9e8-763af88d61bd 00:25:45.074 Thin Provisioning: Not Supported 00:25:45.074 Per-NS Atomic Units: Yes 00:25:45.074 Atomic Boundary Size (Normal): 0 00:25:45.074 Atomic Boundary Size (PFail): 0 00:25:45.074 Atomic Boundary Offset: 0 00:25:45.074 Maximum Single Source Range Length: 65535 00:25:45.074 Maximum Copy Length: 65535 00:25:45.074 Maximum Source Range Count: 1 00:25:45.074 NGUID/EUI64 Never Reused: No 00:25:45.074 Namespace Write Protected: No 00:25:45.074 Number of LBA Formats: 1 00:25:45.074 Current LBA Format: LBA Format #00 00:25:45.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:45.074 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:45.074 11:19:53 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:45.074 rmmod nvme_tcp 00:25:45.074 rmmod nvme_fabrics 00:25:45.074 rmmod nvme_keyring 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 43949 ']' 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 43949 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 43949 ']' 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 43949 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 43949 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 43949' 00:25:45.074 killing process with pid 43949 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 43949 00:25:45.074 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@978 -- # wait 43949 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.335 11:19:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.245 11:19:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.245 00:25:47.245 real 0m12.710s 00:25:47.245 user 0m9.139s 00:25:47.245 sys 0m6.914s 00:25:47.245 11:19:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.245 11:19:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.245 ************************************ 00:25:47.245 END TEST nvmf_identify 00:25:47.245 ************************************ 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.505 ************************************ 00:25:47.505 START TEST nvmf_perf 00:25:47.505 ************************************ 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:47.505 * Looking for test storage... 00:25:47.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 
'op=<' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 
-- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.505 --rc genhtml_branch_coverage=1 00:25:47.505 --rc genhtml_function_coverage=1 00:25:47.505 --rc genhtml_legend=1 00:25:47.505 --rc geninfo_all_blocks=1 00:25:47.505 --rc geninfo_unexecuted_blocks=1 00:25:47.505 00:25:47.505 ' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.505 --rc genhtml_branch_coverage=1 00:25:47.505 --rc genhtml_function_coverage=1 00:25:47.505 --rc genhtml_legend=1 00:25:47.505 --rc geninfo_all_blocks=1 00:25:47.505 --rc geninfo_unexecuted_blocks=1 00:25:47.505 00:25:47.505 ' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.505 --rc genhtml_branch_coverage=1 00:25:47.505 --rc genhtml_function_coverage=1 00:25:47.505 --rc genhtml_legend=1 00:25:47.505 --rc geninfo_all_blocks=1 00:25:47.505 --rc geninfo_unexecuted_blocks=1 00:25:47.505 00:25:47.505 ' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.505 --rc genhtml_branch_coverage=1 00:25:47.505 --rc genhtml_function_coverage=1 00:25:47.505 --rc genhtml_legend=1 00:25:47.505 --rc geninfo_all_blocks=1 00:25:47.505 --rc geninfo_unexecuted_blocks=1 00:25:47.505 00:25:47.505 ' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:47.505 11:19:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.505 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.506 11:19:55 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.506 11:19:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.506 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.766 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.766 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.766 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:25:47.766 11:19:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:55.902 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.902 
11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.902 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:55.903 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:55.903 Found net devices under 0000:31:00.0: cvl_0_0 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:55.903 Found net devices under 0000:31:00.1: cvl_0_1 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:55.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:25:55.903 00:25:55.903 --- 10.0.0.2 ping statistics --- 00:25:55.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.903 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:25:55.903 00:25:55.903 --- 10.0.0.1 ping statistics --- 00:25:55.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.903 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=48785 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 48785 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 48785 ']' 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:55.903 11:20:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:55.903 [2024-11-19 11:20:03.783502] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:25:55.903 [2024-11-19 11:20:03.783596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.903 [2024-11-19 11:20:03.878703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.903 [2024-11-19 11:20:03.920581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.903 [2024-11-19 11:20:03.920620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.903 [2024-11-19 11:20:03.920629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.903 [2024-11-19 11:20:03.920636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.903 [2024-11-19 11:20:03.920642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:55.903 [2024-11-19 11:20:03.922271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.903 [2024-11-19 11:20:03.922392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.903 [2024-11-19 11:20:03.922549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.903 [2024-11-19 11:20:03.922550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.473 11:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.473 11:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:56.474 11:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:56.474 11:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.474 11:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:56.474 11:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.474 11:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:56.474 11:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:57.044 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:57.044 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:57.044 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:57.044 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:57.304 11:20:05 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:57.304 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:57.304 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:57.304 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:57.304 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:57.565 [2024-11-19 11:20:05.666971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.565 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:57.565 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:57.565 11:20:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.827 11:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:57.827 11:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:58.087 11:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.087 [2024-11-19 11:20:06.385549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.087 11:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:25:58.348 11:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:58.348 11:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:58.348 11:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:58.348 11:20:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:59.737 Initializing NVMe Controllers 00:25:59.737 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:59.737 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:59.737 Initialization complete. Launching workers. 00:25:59.737 ======================================================== 00:25:59.737 Latency(us) 00:25:59.737 Device Information : IOPS MiB/s Average min max 00:25:59.737 PCIE (0000:65:00.0) NSID 1 from core 0: 79269.40 309.65 403.01 13.25 5015.23 00:25:59.737 ======================================================== 00:25:59.737 Total : 79269.40 309.65 403.01 13.25 5015.23 00:25:59.737 00:25:59.737 11:20:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:01.123 Initializing NVMe Controllers 00:26:01.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:01.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:01.123 Initialization complete. Launching workers. 
00:26:01.123 ======================================================== 00:26:01.123 Latency(us) 00:26:01.123 Device Information : IOPS MiB/s Average min max 00:26:01.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11914.16 225.81 45948.58 00:26:01.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16468.87 7921.19 47894.74 00:26:01.123 ======================================================== 00:26:01.123 Total : 148.00 0.58 13791.44 225.81 47894.74 00:26:01.123 00:26:01.123 11:20:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:02.509 Initializing NVMe Controllers 00:26:02.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:02.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:02.509 Initialization complete. Launching workers. 
00:26:02.509 ======================================================== 00:26:02.509 Latency(us) 00:26:02.509 Device Information : IOPS MiB/s Average min max 00:26:02.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11259.34 43.98 2843.48 494.53 8578.36 00:26:02.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3810.44 14.88 8441.95 6817.11 16917.44 00:26:02.509 ======================================================== 00:26:02.509 Total : 15069.78 58.87 4259.07 494.53 16917.44 00:26:02.509 00:26:02.509 11:20:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:02.509 11:20:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:02.509 11:20:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:05.056 Initializing NVMe Controllers 00:26:05.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.056 Controller IO queue size 128, less than required. 00:26:05.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:05.056 Controller IO queue size 128, less than required. 00:26:05.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:05.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:05.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:05.056 Initialization complete. Launching workers. 
00:26:05.056 ======================================================== 00:26:05.056 Latency(us) 00:26:05.056 Device Information : IOPS MiB/s Average min max 00:26:05.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1621.52 405.38 80265.97 51741.13 124106.71 00:26:05.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.42 147.35 227623.05 54376.05 344855.08 00:26:05.056 ======================================================== 00:26:05.056 Total : 2210.94 552.74 119550.10 51741.13 344855.08 00:26:05.056 00:26:05.056 11:20:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:05.056 No valid NVMe controllers or AIO or URING devices found 00:26:05.056 Initializing NVMe Controllers 00:26:05.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.056 Controller IO queue size 128, less than required. 00:26:05.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:05.056 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:05.056 Controller IO queue size 128, less than required. 00:26:05.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:05.056 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:05.056 WARNING: Some requested NVMe devices were skipped 00:26:05.056 11:20:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:07.603 Initializing NVMe Controllers 00:26:07.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:07.603 Controller IO queue size 128, less than required. 00:26:07.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:07.603 Controller IO queue size 128, less than required. 00:26:07.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:07.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:07.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:07.603 Initialization complete. Launching workers. 
00:26:07.603 00:26:07.603 ==================== 00:26:07.603 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:07.603 TCP transport: 00:26:07.603 polls: 23389 00:26:07.603 idle_polls: 13138 00:26:07.603 sock_completions: 10251 00:26:07.603 nvme_completions: 6373 00:26:07.603 submitted_requests: 9522 00:26:07.603 queued_requests: 1 00:26:07.603 00:26:07.603 ==================== 00:26:07.603 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:07.603 TCP transport: 00:26:07.603 polls: 25657 00:26:07.603 idle_polls: 15410 00:26:07.603 sock_completions: 10247 00:26:07.603 nvme_completions: 6501 00:26:07.603 submitted_requests: 9802 00:26:07.603 queued_requests: 1 00:26:07.603 ======================================================== 00:26:07.603 Latency(us) 00:26:07.603 Device Information : IOPS MiB/s Average min max 00:26:07.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1592.95 398.24 81326.02 45996.46 153821.99 00:26:07.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1624.95 406.24 79287.56 35779.39 138676.44 00:26:07.603 ======================================================== 00:26:07.603 Total : 3217.90 804.47 80296.66 35779.39 153821.99 00:26:07.603 00:26:07.603 11:20:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:07.603 11:20:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:07.864 11:20:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:07.864 rmmod nvme_tcp 00:26:07.864 rmmod nvme_fabrics 00:26:07.864 rmmod nvme_keyring 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 48785 ']' 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 48785 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 48785 ']' 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 48785 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:07.864 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48785 00:26:08.125 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:08.125 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:08.125 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48785' 00:26:08.125 killing process with pid 48785 00:26:08.125 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 48785 00:26:08.125 11:20:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 48785 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.040 11:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:12.588 00:26:12.588 real 0m24.650s 00:26:12.588 user 0m58.478s 00:26:12.588 sys 0m8.976s 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:12.588 ************************************ 00:26:12.588 END TEST nvmf_perf 00:26:12.588 ************************************ 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.588 ************************************ 00:26:12.588 START TEST nvmf_fio_host 00:26:12.588 ************************************ 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:12.588 * Looking for test storage... 00:26:12.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.588 11:20:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.588 11:20:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.588 --rc genhtml_branch_coverage=1 00:26:12.588 --rc genhtml_function_coverage=1 00:26:12.588 --rc genhtml_legend=1 00:26:12.588 --rc geninfo_all_blocks=1 00:26:12.588 --rc geninfo_unexecuted_blocks=1 00:26:12.588 00:26:12.588 ' 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.588 --rc genhtml_branch_coverage=1 00:26:12.588 --rc genhtml_function_coverage=1 00:26:12.588 --rc genhtml_legend=1 00:26:12.588 --rc geninfo_all_blocks=1 00:26:12.588 --rc geninfo_unexecuted_blocks=1 00:26:12.588 00:26:12.588 ' 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.588 --rc genhtml_branch_coverage=1 00:26:12.588 --rc genhtml_function_coverage=1 00:26:12.588 --rc genhtml_legend=1 00:26:12.588 --rc geninfo_all_blocks=1 00:26:12.588 --rc geninfo_unexecuted_blocks=1 00:26:12.588 00:26:12.588 ' 00:26:12.588 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.588 --rc genhtml_branch_coverage=1 00:26:12.588 --rc genhtml_function_coverage=1 00:26:12.589 --rc genhtml_legend=1 00:26:12.589 --rc geninfo_all_blocks=1 00:26:12.589 --rc geninfo_unexecuted_blocks=1 00:26:12.589 00:26:12.589 ' 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.589 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:12.590 11:20:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.590 11:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:26:20.740 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:20.740 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.740 11:20:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:20.740 Found net devices under 0000:31:00.0: cvl_0_0 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:20.740 Found net devices under 0000:31:00.1: cvl_0_1 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.740 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.741 11:20:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.741 11:20:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:26:20.741 00:26:20.741 --- 10.0.0.2 ping statistics --- 00:26:20.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.741 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:26:20.741 00:26:20.741 --- 10.0.0.1 ping statistics --- 00:26:20.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.741 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=56518 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 56518 00:26:20.741 
11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 56518 ']' 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.741 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:21.004 [2024-11-19 11:20:29.139591] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:26:21.004 [2024-11-19 11:20:29.139658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.004 [2024-11-19 11:20:29.231838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.004 [2024-11-19 11:20:29.272889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.004 [2024-11-19 11:20:29.272927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:21.004 [2024-11-19 11:20:29.272935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.004 [2024-11-19 11:20:29.272942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.004 [2024-11-19 11:20:29.272949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.004 [2024-11-19 11:20:29.274576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.004 [2024-11-19 11:20:29.274690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.004 [2024-11-19 11:20:29.274846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.004 [2024-11-19 11:20:29.274847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.950 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.950 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:21.950 11:20:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:21.950 [2024-11-19 11:20:30.106000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.950 11:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:21.950 11:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:21.950 11:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.950 11:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:22.211 Malloc1 00:26:22.211 11:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.471 11:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:22.471 11:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.731 [2024-11-19 11:20:30.915575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.731 11:20:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:22.991 11:20:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:22.991 11:20:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:23.251 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:23.251 fio-3.35 00:26:23.251 Starting 1 thread 00:26:25.822 00:26:25.822 test: (groupid=0, jobs=1): err= 0: pid=57058: Tue Nov 19 11:20:33 2024 00:26:25.822 read: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(90.0MiB/2005msec) 00:26:25.822 slat (usec): min=2, max=279, avg= 2.17, stdev= 2.59 00:26:25.822 clat (usec): min=3676, max=9123, avg=6141.19, stdev=1203.17 00:26:25.822 lat (usec): min=3710, max=9125, avg=6143.36, stdev=1203.17 00:26:25.822 clat percentiles (usec): 00:26:25.822 | 1.00th=[ 4424], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 4948], 00:26:25.822 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5604], 60.00th=[ 6783], 00:26:25.822 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 7963], 00:26:25.822 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8717], 99.95th=[ 8848], 00:26:25.822 | 99.99th=[ 9110] 00:26:25.822 bw ( KiB/s): min=37144, max=56296, per=99.96%, avg=45950.00, stdev=9288.54, samples=4 00:26:25.822 iops : min= 9286, max=14074, avg=11487.50, stdev=2322.14, samples=4 00:26:25.822 write: IOPS=11.4k, BW=44.6MiB/s (46.7MB/s)(89.3MiB/2005msec); 0 zone resets 00:26:25.822 slat (usec): min=2, max=267, avg= 2.23, stdev= 1.95 00:26:25.822 clat (usec): min=2880, max=8246, avg=4954.19, stdev=970.60 00:26:25.822 lat (usec): min=2897, max=8248, avg=4956.42, stdev=970.64 00:26:25.822 clat percentiles (usec): 00:26:25.822 | 1.00th=[ 3523], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 4015], 00:26:25.822 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4490], 60.00th=[ 5473], 00:26:25.822 | 70.00th=[ 
5800], 80.00th=[ 5997], 90.00th=[ 6259], 95.00th=[ 6390], 00:26:25.822 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7111], 99.95th=[ 7570], 00:26:25.822 | 99.99th=[ 8225] 00:26:25.822 bw ( KiB/s): min=37960, max=55184, per=99.98%, avg=45622.00, stdev=8809.89, samples=4 00:26:25.822 iops : min= 9490, max=13796, avg=11405.50, stdev=2202.47, samples=4 00:26:25.822 lat (msec) : 4=9.48%, 10=90.52% 00:26:25.822 cpu : usr=72.41%, sys=26.30%, ctx=25, majf=0, minf=17 00:26:25.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:25.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:25.822 issued rwts: total=23041,22872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:25.822 00:26:25.822 Run status group 0 (all jobs): 00:26:25.822 READ: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=90.0MiB (94.4MB), run=2005-2005msec 00:26:25.822 WRITE: bw=44.6MiB/s (46.7MB/s), 44.6MiB/s-44.6MiB/s (46.7MB/s-46.7MB/s), io=89.3MiB (93.7MB), run=2005-2005msec 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:25.822 
11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:25.822 11:20:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:26.094 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:26.094 fio-3.35 00:26:26.094 Starting 1 thread 00:26:28.642 00:26:28.642 test: (groupid=0, jobs=1): err= 0: pid=57882: Tue Nov 19 11:20:36 2024 00:26:28.642 read: IOPS=9202, BW=144MiB/s (151MB/s)(288MiB/2003msec) 00:26:28.642 slat (usec): min=3, max=113, avg= 3.62, stdev= 1.59 00:26:28.642 clat (usec): min=1971, max=48736, avg=8287.87, stdev=3228.05 00:26:28.642 lat (usec): min=1974, max=48740, avg=8291.49, stdev=3228.12 00:26:28.642 clat percentiles (usec): 00:26:28.642 | 1.00th=[ 3982], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6390], 00:26:28.642 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8586], 00:26:28.642 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11600], 00:26:28.642 | 99.00th=[13173], 99.50th=[14484], 99.90th=[47449], 99.95th=[48497], 00:26:28.642 | 99.99th=[48497] 00:26:28.642 bw ( KiB/s): min=64096, max=81344, per=49.65%, avg=73112.00, stdev=8479.71, samples=4 00:26:28.642 iops : min= 4006, max= 5084, avg=4569.50, stdev=529.98, samples=4 00:26:28.642 write: IOPS=5343, BW=83.5MiB/s (87.5MB/s)(150MiB/1792msec); 0 zone resets 00:26:28.642 slat (usec): min=39, max=359, avg=40.90, stdev= 6.97 00:26:28.642 clat (usec): min=2474, max=50254, avg=9843.78, stdev=3122.13 00:26:28.642 lat (usec): min=2514, max=50299, avg=9884.69, stdev=3122.64 00:26:28.642 clat percentiles (usec): 00:26:28.642 | 1.00th=[ 6521], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 8225], 00:26:28.642 
| 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:26:28.642 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11994], 95.00th=[13042], 00:26:28.642 | 99.00th=[15533], 99.50th=[16909], 99.90th=[49546], 99.95th=[50070], 00:26:28.642 | 99.99th=[50070] 00:26:28.642 bw ( KiB/s): min=66688, max=84864, per=89.07%, avg=76144.00, stdev=8908.98, samples=4 00:26:28.642 iops : min= 4168, max= 5304, avg=4759.00, stdev=556.81, samples=4 00:26:28.642 lat (msec) : 2=0.01%, 4=0.74%, 10=74.35%, 20=24.45%, 50=0.44% 00:26:28.642 lat (msec) : 100=0.01% 00:26:28.642 cpu : usr=85.52%, sys=13.53%, ctx=19, majf=0, minf=35 00:26:28.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:28.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:28.642 issued rwts: total=18433,9575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:28.642 00:26:28.642 Run status group 0 (all jobs): 00:26:28.642 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=288MiB (302MB), run=2003-2003msec 00:26:28.642 WRITE: bw=83.5MiB/s (87.5MB/s), 83.5MiB/s-83.5MiB/s (87.5MB/s-87.5MB/s), io=150MiB (157MB), run=1792-1792msec 00:26:28.642 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.642 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:28.642 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:28.643 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:28.643 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:28.643 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 
-- # nvmfcleanup 00:26:28.643 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:28.643 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:28.643 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:28.643 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:28.643 11:20:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:28.643 rmmod nvme_tcp 00:26:28.643 rmmod nvme_fabrics 00:26:28.643 rmmod nvme_keyring 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 56518 ']' 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 56518 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 56518 ']' 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 56518 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56518 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56518' 
00:26:28.904 killing process with pid 56518 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 56518 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 56518 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.904 11:20:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:31.449 00:26:31.449 real 0m18.895s 00:26:31.449 user 1m9.502s 00:26:31.449 sys 0m8.465s 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.449 ************************************ 00:26:31.449 
END TEST nvmf_fio_host 00:26:31.449 ************************************ 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.449 ************************************ 00:26:31.449 START TEST nvmf_failover 00:26:31.449 ************************************ 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:31.449 * Looking for test storage... 00:26:31.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@337 -- # IFS=.-: 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.449 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:31.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.449 --rc genhtml_branch_coverage=1 00:26:31.449 --rc genhtml_function_coverage=1 00:26:31.450 --rc genhtml_legend=1 00:26:31.450 --rc geninfo_all_blocks=1 00:26:31.450 --rc geninfo_unexecuted_blocks=1 00:26:31.450 00:26:31.450 ' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:31.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.450 --rc genhtml_branch_coverage=1 00:26:31.450 --rc genhtml_function_coverage=1 00:26:31.450 --rc genhtml_legend=1 00:26:31.450 --rc geninfo_all_blocks=1 00:26:31.450 --rc geninfo_unexecuted_blocks=1 00:26:31.450 00:26:31.450 ' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:31.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.450 --rc genhtml_branch_coverage=1 00:26:31.450 --rc genhtml_function_coverage=1 00:26:31.450 --rc genhtml_legend=1 00:26:31.450 --rc geninfo_all_blocks=1 00:26:31.450 --rc geninfo_unexecuted_blocks=1 00:26:31.450 00:26:31.450 ' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:31.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.450 --rc genhtml_branch_coverage=1 00:26:31.450 --rc genhtml_function_coverage=1 00:26:31.450 --rc genhtml_legend=1 00:26:31.450 --rc geninfo_all_blocks=1 
00:26:31.450 --rc geninfo_unexecuted_blocks=1 00:26:31.450 00:26:31.450 ' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.450 11:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.735 11:20:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:39.735 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.735 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:39.736 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
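The device matching traced above (nvmf/common.sh@320–377) buckets NICs by PCI device ID before any driver checks, and both 0x8086:0x159b ports land in the e810 bucket (driver "ice"). A minimal stand-alone restatement of that lookup — the function name is assumed for illustration, and only the IDs visible in this trace are included:

```shell
# Hypothetical helper mirroring the e810/x722/mlx bucketing in nvmf/common.sh;
# the real script builds bash arrays from a pci_bus_cache, this just classifies.
classify_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;   # Intel E810 family ("ice" driver)
        0x37d2)        echo x722 ;;   # Intel X722
        0x1017|0x1019|0x1015|0x1013|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
                       echo mlx ;;    # Mellanox ConnectX / BlueField IDs from the trace
        *)             echo unknown ;;
    esac
}

classify_nic 0x159b   # -> e810, matching "Found 0000:31:00.0 (0x8086 - 0x159b)"
```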
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:39.736 Found net devices under 0000:31:00.0: cvl_0_0 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:39.736 Found net devices under 0000:31:00.1: cvl_0_1 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:26:39.736 00:26:39.736 --- 10.0.0.2 ping statistics --- 00:26:39.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.736 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:39.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:26:39.736 00:26:39.736 --- 10.0.0.1 ping statistics --- 00:26:39.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.736 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=62898 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 62898 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 62898 ']' 00:26:39.736 11:20:47 
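The nvmf_tcp_init sequence traced above (nvmf/common.sh@250–291) picks the first detected device (cvl_0_0) as the target-side interface, moves it into the cvl_0_0_ns_spdk namespace with 10.0.0.2, and leaves the second (cvl_0_1) in the root namespace as the initiator with 10.0.0.1, then validates both directions with ping. The selection step can be sketched as follows — the helper name and the single-device fallback are assumptions, not the script's own code:

```shell
# Sketch of the interface-role assignment: first device -> target, second ->
# initiator; with only one device both roles fall back to it (assumption).
pick_tcp_interfaces() {
    NVMF_TARGET_INTERFACE=$1
    NVMF_INITIATOR_INTERFACE=${2:-$1}
}

pick_tcp_interfaces cvl_0_0 cvl_0_1
echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"
# -> target=cvl_0_0 initiator=cvl_0_1
```

The namespace move itself (`ip netns add`, `ip link set ... netns`, `ip addr add`, the iptables ACCEPT rule) requires root and real NICs, which is why the trace runs it directly on the build node.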
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:39.736 11:20:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:39.736 [2024-11-19 11:20:47.468376] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:26:39.736 [2024-11-19 11:20:47.468441] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.736 [2024-11-19 11:20:47.578011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:39.736 [2024-11-19 11:20:47.629081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.737 [2024-11-19 11:20:47.629136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.737 [2024-11-19 11:20:47.629144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.737 [2024-11-19 11:20:47.629151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:39.737 [2024-11-19 11:20:47.629157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.737 [2024-11-19 11:20:47.630955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.737 [2024-11-19 11:20:47.631123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.737 [2024-11-19 11:20:47.631124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.998 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.998 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:39.998 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.998 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.998 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:39.998 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.998 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:40.259 [2024-11-19 11:20:48.470471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.259 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:40.520 Malloc0 00:26:40.520 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.781 11:20:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.781 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.043 [2024-11-19 11:20:49.213267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.043 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:41.043 [2024-11-19 11:20:49.381667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:41.304 [2024-11-19 11:20:49.554153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=63265 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 63265 /var/tmp/bdevperf.sock 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # 
'[' -z 63265 ']' 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:41.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.304 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:41.564 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.564 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:41.564 11:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:41.824 NVMe0n1 00:26:41.824 11:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:42.084 00:26:42.084 11:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=63593 00:26:42.084 11:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:42.084 11:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:43.469 
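The target bring-up traced in host/failover.sh@22–28 is a fixed RPC sequence: create the TCP transport, create a 64 MB Malloc bdev with 512-byte blocks, create subsystem cnode1, attach the namespace, then add listeners on ports 4420–4422 so bdevperf has paths to fail over between. A condensed dry-run replay — the `rpc` wrapper below only prints the commands, since executing them needs a live nvmf_tgt at /var/tmp/spdk.sock:

```shell
# Dry-run stand-in for scripts/rpc.py; prints each RPC instead of issuing it.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
```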
11:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.469 [2024-11-19 11:20:51.572171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7390 is same with the state(6) to be set
00:26:43.470 11:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:46.774 11:20:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:46.774 00:26:46.774 11:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:47.035 [2024-11-19 11:20:55.168824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set
is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 [2024-11-19 11:20:55.169062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8140 is same with the state(6) to be set 00:26:47.036 11:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:50.339 11:20:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.339 [2024-11-19 11:20:58.360711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.339 11:20:58 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:51.282 11:20:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:51.282 [2024-11-19 11:20:59.554504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a9090 is same with the state(6) to be set 00:26:51.283 11:20:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 63593 00:26:57.875 { 00:26:57.875 "results": [ 00:26:57.875 { 00:26:57.875 "job": "NVMe0n1", 00:26:57.875 "core_mask": "0x1", 00:26:57.875 "workload": "verify", 00:26:57.875 "status": "finished", 00:26:57.875 "verify_range": { 00:26:57.875 "start": 0, 00:26:57.875 "length": 16384 00:26:57.875 }, 00:26:57.875 "queue_depth": 128, 00:26:57.875 "io_size": 4096, 00:26:57.875 "runtime": 15.004513, 00:26:57.875 "iops": 11050.808513411932, 00:26:57.875 "mibps": 43.16722075551536, 00:26:57.875 "io_failed": 8821, 00:26:57.875 "io_timeout": 0, 00:26:57.875 "avg_latency_us": 10969.90054037133, 00:26:57.875 "min_latency_us": 552.96, 00:26:57.875 "max_latency_us": 22282.24 00:26:57.875 } 00:26:57.875 ], 00:26:57.875 "core_count": 1 00:26:57.875 } 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 63265 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 63265 ']' 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 63265 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 63265 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63265' 00:26:57.875 killing process with pid 63265 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 63265 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 63265 00:26:57.875 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:57.875 [2024-11-19 11:20:49.625549] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:26:57.875 [2024-11-19 11:20:49.625607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63265 ] 00:26:57.875 [2024-11-19 11:20:49.703952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.875 [2024-11-19 11:20:49.740002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.875 Running I/O for 15 seconds... 
00:26:57.875 10951.00 IOPS, 42.78 MiB/s [2024-11-19T10:21:06.227Z] [2024-11-19 11:20:51.573996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:57.875 [2024-11-19 11:20:51.574122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.875 [2024-11-19 11:20:51.574271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.875 [2024-11-19 11:20:51.574279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.876 [2024-11-19 11:20:51.574288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.876 [2024-11-19 11:20:51.574295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.876 [2024-11-19 11:20:51.574305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.876 [2024-11-19 11:20:51.574312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.876 [2024-11-19 11:20:51.574321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.876 [2024-11-19 11:20:51.574328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.876 [2024-11-19 11:20:51.574339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.876 [2024-11-19 11:20:51.574347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.876 [2024-11-19 11:20:51.574357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.876 [2024-11-19 11:20:51.574365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.876 [2024-11-19 11:20:51.574374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.876 [2024-11-19 11:20:51.574381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.876 [2024-11-19 11:20:51.574391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.876 [2024-11-19 11:20:51.574398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.876 [2024-11-19 11:20:51.574407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.876 
00:26:57.876 [2024-11-19 11:20:51.574415 - 11:20:51.576165] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs - READ sqid:1 nsid:1 lba:94592-94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then WRITE sqid:1 nsid:1 lba:94864-95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 (lba stepping by 8, cid varying per command), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.878 [2024-11-19 11:20:51.576187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:57.878 [2024-11-19 11:20:51.576194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:57.878 [2024-11-19 11:20:51.576200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95424 len:8 PRP1 0x0 PRP2 0x0
00:26:57.878 [2024-11-19 11:20:51.576208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.878 [2024-11-19 11:20:51.576249] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:57.878 [2024-11-19 11:20:51.576271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:57.878 [2024-11-19 11:20:51.576279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.878 [2024-11-19 11:20:51.576288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:57.878 [2024-11-19 11:20:51.576295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.878 [2024-11-19 11:20:51.576303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:57.878 [2024-11-19 11:20:51.576310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.878 [2024-11-19 11:20:51.576319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:57.878 [2024-11-19 11:20:51.576326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:51.576334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:57.879 [2024-11-19 11:20:51.576363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a1d80 (9): Bad file descriptor
00:26:57.879 [2024-11-19 11:20:51.579869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:57.879 [2024-11-19 11:20:51.698510] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:57.879 10501.50 IOPS, 41.02 MiB/s [2024-11-19T10:21:06.231Z] 10716.67 IOPS, 41.86 MiB/s [2024-11-19T10:21:06.231Z] 10894.50 IOPS, 42.56 MiB/s [2024-11-19T10:21:06.231Z] [2024-11-19 11:20:55.169785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.169984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.169992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.879 [2024-11-19 11:20:55.170195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.879 [2024-11-19 11:20:55.170212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.879 [2024-11-19 11:20:55.170229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.879 [2024-11-19 11:20:55.170246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.879 [2024-11-19 11:20:55.170262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.879 [2024-11-19 11:20:55.170279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.879 [2024-11-19 11:20:55.170296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.879 [2024-11-19 11:20:55.170313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.879 [2024-11-19 11:20:55.170329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.879 [2024-11-19 11:20:55.170338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.880 [2024-11-19 11:20:55.170563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.880 [2024-11-19 11:20:55.170992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.880 [2024-11-19 11:20:55.170999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.881 [2024-11-19 11:20:55.171519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.881 [2024-11-19 11:20:55.171527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.881 [2024-11-19 11:20:55.171536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.881 [2024-11-19 11:20:55.171543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.881 [2024-11-19 11:20:55.171555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.881 [2024-11-19 11:20:55.171562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.881 [2024-11-19 11:20:55.171571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.881 [2024-11-19 11:20:55.171578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.881 [2024-11-19 11:20:55.171587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.881 [2024-11-19 11:20:55.171594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.881 [2024-11-19 11:20:55.171604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.881 [2024-11-19 11:20:55.171611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.881 [2024-11-19 11:20:55.171633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.881 [2024-11-19 11:20:55.171640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51376 len:8 PRP1 0x0 PRP2 0x0 00:26:57.881 [2024-11-19 11:20:55.171647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.881 [2024-11-19 11:20:55.171658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.881 [2024-11-19 11:20:55.171663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.881 [2024-11-19 11:20:55.171669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51384 len:8 PRP1 0x0 PRP2 0x0 00:26:57.881 [2024-11-19 11:20:55.171676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51392 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51400 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51408 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51416 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51424 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51432 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51440 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51448 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51456 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51464 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51472 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.171975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.171983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.171990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.171997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51480 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.172004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.172011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 
[2024-11-19 11:20:55.172017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.172023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50648 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.172031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.172038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.172044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.172050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50656 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.172057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.172064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.172070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.172076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50664 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.183111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.183152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.183161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:50672 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.183168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.183182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.183188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50680 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.183196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.183209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.183215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50688 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.183222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.882 [2024-11-19 11:20:55.183235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.882 [2024-11-19 11:20:55.183241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50696 len:8 PRP1 0x0 PRP2 0x0 00:26:57.882 [2024-11-19 11:20:55.183248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183294] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:57.882 [2024-11-19 11:20:55.183325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.882 [2024-11-19 11:20:55.183334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.882 [2024-11-19 11:20:55.183351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.882 [2024-11-19 11:20:55.183366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.882 [2024-11-19 11:20:55.183382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.882 [2024-11-19 11:20:55.183389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:26:57.882 [2024-11-19 11:20:55.183430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a1d80 (9): Bad file descriptor 00:26:57.882 [2024-11-19 11:20:55.186939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:57.882 [2024-11-19 11:20:55.256231] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:26:57.882 10820.00 IOPS, 42.27 MiB/s [2024-11-19T10:21:06.234Z] 10873.67 IOPS, 42.48 MiB/s [2024-11-19T10:21:06.234Z] 10919.43 IOPS, 42.65 MiB/s [2024-11-19T10:21:06.234Z] 10954.00 IOPS, 42.79 MiB/s [2024-11-19T10:21:06.234Z] 10988.00 IOPS, 42.92 MiB/s [2024-11-19T10:21:06.234Z] [2024-11-19 11:20:59.557286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 
11:20:59.557379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557482] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 
11:20:59.557675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557966] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.883 [2024-11-19 11:20:59.557991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.883 [2024-11-19 11:20:59.557999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.884 [2024-11-19 11:20:59.558015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.884 [2024-11-19 11:20:59.558032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.884 [2024-11-19 11:20:59.558048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:125 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.884 [2024-11-19 11:20:59.558067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:57.884 [2024-11-19 11:20:59.558159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67656 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 
11:20:59.558440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558531] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.884 [2024-11-19 11:20:59.558564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.884 [2024-11-19 11:20:59.558573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 
[2024-11-19 11:20:59.558914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.885 [2024-11-19 11:20:59.558964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.558985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.558994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67960 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67968 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67976 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67984 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67992 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68000 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68008 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68016 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68024 len:8 PRP1 0x0 PRP2 0x0 00:26:57.885 [2024-11-19 11:20:59.559214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.885 [2024-11-19 11:20:59.559222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.885 [2024-11-19 11:20:59.559227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.885 [2024-11-19 11:20:59.559233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68032 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68040 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68048 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68056 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68064 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68072 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 
[2024-11-19 11:20:59.559384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68080 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68088 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68096 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:68104 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68112 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.559510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.559515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.559521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68120 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.559528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.571694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68128 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571712] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.571725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68136 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.571751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68144 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.571778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68152 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 
11:20:59.571804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68160 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.571830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68168 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.571856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68176 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.571891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68184 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.886 [2024-11-19 11:20:59.571916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.886 [2024-11-19 11:20:59.571922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68192 len:8 PRP1 0x0 PRP2 0x0 00:26:57.886 [2024-11-19 11:20:59.571929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.571974] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:57.886 [2024-11-19 11:20:59.572004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.886 [2024-11-19 11:20:59.572012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.572022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.886 [2024-11-19 11:20:59.572029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.572037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.886 [2024-11-19 11:20:59.572045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.572053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.886 [2024-11-19 11:20:59.572060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.886 [2024-11-19 11:20:59.572068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:57.886 [2024-11-19 11:20:59.572107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a1d80 (9): Bad file descriptor 00:26:57.887 [2024-11-19 11:20:59.575610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:57.887 [2024-11-19 11:20:59.607456] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:57.887 10992.90 IOPS, 42.94 MiB/s [2024-11-19T10:21:06.239Z] 11006.91 IOPS, 43.00 MiB/s [2024-11-19T10:21:06.239Z] 11013.50 IOPS, 43.02 MiB/s [2024-11-19T10:21:06.239Z] 11028.54 IOPS, 43.08 MiB/s [2024-11-19T10:21:06.239Z] 11037.14 IOPS, 43.11 MiB/s 00:26:57.887 Latency(us) 00:26:57.887 [2024-11-19T10:21:06.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.887 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.887 Verification LBA range: start 0x0 length 0x4000 00:26:57.887 NVMe0n1 : 15.00 11050.81 43.17 587.89 0.00 10969.90 552.96 22282.24 00:26:57.887 [2024-11-19T10:21:06.239Z] =================================================================================================================== 00:26:57.887 [2024-11-19T10:21:06.239Z] Total : 11050.81 43.17 587.89 0.00 10969.90 552.96 22282.24 00:26:57.887 Received shutdown signal, test time was about 15.000000 seconds 00:26:57.887 00:26:57.887 Latency(us) 00:26:57.887 [2024-11-19T10:21:06.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.887 [2024-11-19T10:21:06.239Z] 
=================================================================================================================== 00:26:57.887 [2024-11-19T10:21:06.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=66557 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 66557 /var/tmp/bdevperf.sock 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 66557 ']' 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:57.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.887 11:21:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:58.457 11:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.457 11:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:58.458 11:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:58.458 [2024-11-19 11:21:06.755530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:58.458 11:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:58.717 [2024-11-19 11:21:06.931956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:58.718 11:21:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:58.977 NVMe0n1 00:26:58.977 11:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:59.238 00:26:59.238 11:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:59.498 00:26:59.758 11:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:59.758 11:21:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:59.758 11:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:00.018 11:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:03.317 11:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:03.317 11:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:03.317 11:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=68173 00:27:03.317 11:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:03.317 11:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 68173 00:27:04.260 { 00:27:04.260 "results": [ 00:27:04.260 { 00:27:04.260 "job": "NVMe0n1", 00:27:04.260 "core_mask": "0x1", 00:27:04.260 "workload": "verify", 00:27:04.260 "status": "finished", 00:27:04.260 "verify_range": { 00:27:04.260 "start": 0, 00:27:04.260 "length": 16384 00:27:04.260 }, 00:27:04.260 "queue_depth": 128, 00:27:04.260 "io_size": 4096, 00:27:04.260 "runtime": 1.007108, 00:27:04.260 "iops": 11125.916982091296, 00:27:04.260 "mibps": 43.460613211294124, 00:27:04.260 "io_failed": 0, 00:27:04.260 "io_timeout": 0, 00:27:04.260 "avg_latency_us": 
11429.923945857503, 00:27:04.260 "min_latency_us": 1713.4933333333333, 00:27:04.260 "max_latency_us": 11250.346666666666 00:27:04.260 } 00:27:04.260 ], 00:27:04.260 "core_count": 1 00:27:04.260 } 00:27:04.260 11:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:04.260 [2024-11-19 11:21:05.807582] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:27:04.260 [2024-11-19 11:21:05.807641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66557 ] 00:27:04.260 [2024-11-19 11:21:05.885715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.260 [2024-11-19 11:21:05.921579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.260 [2024-11-19 11:21:08.193971] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:04.260 [2024-11-19 11:21:08.194016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.260 [2024-11-19 11:21:08.194027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.260 [2024-11-19 11:21:08.194038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.260 [2024-11-19 11:21:08.194046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.260 [2024-11-19 11:21:08.194054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:04.260 [2024-11-19 11:21:08.194061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.260 [2024-11-19 11:21:08.194069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.260 [2024-11-19 11:21:08.194076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.260 [2024-11-19 11:21:08.194088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:27:04.260 [2024-11-19 11:21:08.194113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:04.260 [2024-11-19 11:21:08.194128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6ed80 (9): Bad file descriptor 00:27:04.260 [2024-11-19 11:21:08.243082] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:04.260 Running I/O for 1 seconds... 
00:27:04.260 11077.00 IOPS, 43.27 MiB/s 00:27:04.260 Latency(us) 00:27:04.260 [2024-11-19T10:21:12.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.260 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:04.260 Verification LBA range: start 0x0 length 0x4000 00:27:04.260 NVMe0n1 : 1.01 11125.92 43.46 0.00 0.00 11429.92 1713.49 11250.35 00:27:04.260 [2024-11-19T10:21:12.612Z] =================================================================================================================== 00:27:04.260 [2024-11-19T10:21:12.612Z] Total : 11125.92 43.46 0.00 0.00 11429.92 1713.49 11250.35 00:27:04.260 11:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:04.260 11:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:04.521 11:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:04.781 11:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:04.781 11:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:04.781 11:21:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:05.041 11:21:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 66557 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 66557 ']' 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 66557 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66557 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66557' 00:27:08.338 killing process with pid 66557 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 66557 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 66557 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:08.338 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.598 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:08.598 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:08.598 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:08.598 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.598 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:08.598 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.598 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:08.598 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.599 rmmod nvme_tcp 00:27:08.599 rmmod nvme_fabrics 00:27:08.599 rmmod nvme_keyring 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 62898 ']' 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 62898 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 62898 ']' 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 62898 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.599 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62898 00:27:08.859 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:27:08.859 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:08.859 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62898' 00:27:08.859 killing process with pid 62898 00:27:08.859 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 62898 00:27:08.859 11:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 62898 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.859 11:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.403 11:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:11.403 00:27:11.403 real 0m39.807s 00:27:11.403 user 2m0.352s 00:27:11.404 sys 0m9.047s 00:27:11.404 11:21:19 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:11.404 ************************************ 00:27:11.404 END TEST nvmf_failover 00:27:11.404 ************************************ 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.404 ************************************ 00:27:11.404 START TEST nvmf_host_discovery 00:27:11.404 ************************************ 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:11.404 * Looking for test storage... 
00:27:11.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:11.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.404 --rc genhtml_branch_coverage=1 00:27:11.404 --rc genhtml_function_coverage=1 00:27:11.404 --rc 
genhtml_legend=1 00:27:11.404 --rc geninfo_all_blocks=1 00:27:11.404 --rc geninfo_unexecuted_blocks=1 00:27:11.404 00:27:11.404 ' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:11.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.404 --rc genhtml_branch_coverage=1 00:27:11.404 --rc genhtml_function_coverage=1 00:27:11.404 --rc genhtml_legend=1 00:27:11.404 --rc geninfo_all_blocks=1 00:27:11.404 --rc geninfo_unexecuted_blocks=1 00:27:11.404 00:27:11.404 ' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:11.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.404 --rc genhtml_branch_coverage=1 00:27:11.404 --rc genhtml_function_coverage=1 00:27:11.404 --rc genhtml_legend=1 00:27:11.404 --rc geninfo_all_blocks=1 00:27:11.404 --rc geninfo_unexecuted_blocks=1 00:27:11.404 00:27:11.404 ' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:11.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.404 --rc genhtml_branch_coverage=1 00:27:11.404 --rc genhtml_function_coverage=1 00:27:11.404 --rc genhtml_legend=1 00:27:11.404 --rc geninfo_all_blocks=1 00:27:11.404 --rc geninfo_unexecuted_blocks=1 00:27:11.404 00:27:11.404 ' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.404 11:21:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.404 11:21:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.404 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.405 11:21:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.405 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.545 
11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.545 11:21:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:19.545 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:19.545 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.545 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:19.546 Found net devices under 0000:31:00.0: cvl_0_0 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:19.546 Found net devices under 0000:31:00.1: cvl_0_1 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.546 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:27:19.807 00:27:19.807 --- 10.0.0.2 ping statistics --- 00:27:19.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.807 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:27:19.807 00:27:19.807 --- 10.0.0.1 ping statistics --- 00:27:19.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.807 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.807 
11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=73891 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 73891 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 73891 ']' 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.807 11:21:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.807 [2024-11-19 11:21:28.034030] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:27:19.807 [2024-11-19 11:21:28.034113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.807 [2024-11-19 11:21:28.112869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.807 [2024-11-19 11:21:28.144198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.807 [2024-11-19 11:21:28.144231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.807 [2024-11-19 11:21:28.144237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.807 [2024-11-19 11:21:28.144242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.807 [2024-11-19 11:21:28.144246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:19.807 [2024-11-19 11:21:28.144757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 [2024-11-19 11:21:28.269636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 [2024-11-19 11:21:28.281887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:20.068 11:21:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 null0 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 null1 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=73917 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 73917 /tmp/host.sock 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 73917 ']' 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:20.068 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:20.068 11:21:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 [2024-11-19 11:21:28.378034] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:27:20.068 [2024-11-19 11:21:28.378098] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73917 ] 00:27:20.329 [2024-11-19 11:21:28.461703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.329 [2024-11-19 11:21:28.503640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:20.902 11:21:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:20.902 11:21:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.902 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.163 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.163 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:21.163 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:21.163 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:21.164 
11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.164 [2024-11-19 11:21:29.492842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.164 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.426 
11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:27:21.426 11:21:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:21.998 [2024-11-19 11:21:30.238052] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:21.998 [2024-11-19 11:21:30.238075] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:21.998 [2024-11-19 11:21:30.238088] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:21.998 [2024-11-19 11:21:30.324373] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:22.259 [2024-11-19 11:21:30.379179] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was 
created to 10.0.0.2:4420 00:27:22.259 [2024-11-19 11:21:30.380332] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2266650:1 started. 00:27:22.259 [2024-11-19 11:21:30.381968] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:22.259 [2024-11-19 11:21:30.381988] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:22.259 [2024-11-19 11:21:30.387201] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2266650 was disconnected and freed. delete nvme_qpair. 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.521 
11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:22.521 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:22.522 
11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.522 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:22.783 
11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.783 11:21:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.044 [2024-11-19 11:21:31.166991] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x22669d0:1 started. 00:27:23.044 [2024-11-19 11:21:31.170711] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x22669d0 was disconnected and freed. delete nvme_qpair. 
00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 [2024-11-19 11:21:31.257376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:23.044 [2024-11-19 11:21:31.258587] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:23.044 [2024-11-19 11:21:31.258607] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.044 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.045 11:21:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.045 [2024-11-19 11:21:31.346316] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:23.045 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.305 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:23.305 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:23.305 [2024-11-19 11:21:31.648794] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:23.305 [2024-11-19 11:21:31.648832] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:23.305 [2024-11-19 11:21:31.648841] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:27:23.305 [2024-11-19 11:21:31.648846] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.249 [2024-11-19 11:21:32.529278] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:24.249 [2024-11-19 11:21:32.529299] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:24.249 [2024-11-19 11:21:32.531561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.249 [2024-11-19 11:21:32.531581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.249 [2024-11-19 11:21:32.531591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:24.249 [2024-11-19 11:21:32.531599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.249 [2024-11-19 11:21:32.531607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.249 [2024-11-19 11:21:32.531615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.249 [2024-11-19 11:21:32.531623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.249 [2024-11-19 11:21:32.531630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.249 [2024-11-19 11:21:32.531637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:24.249 11:21:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:24.249 [2024-11-19 11:21:32.541574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.249 [2024-11-19 11:21:32.551610] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.249 [2024-11-19 11:21:32.551624] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.249 [2024-11-19 11:21:32.551629] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.249 [2024-11-19 11:21:32.551634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.249 [2024-11-19 11:21:32.551652] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:24.249 [2024-11-19 11:21:32.551977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.249 [2024-11-19 11:21:32.551994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.249 [2024-11-19 11:21:32.552002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.249 [2024-11-19 11:21:32.552014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.249 [2024-11-19 11:21:32.552026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.249 [2024-11-19 11:21:32.552032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.249 [2024-11-19 11:21:32.552040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.249 [2024-11-19 11:21:32.552047] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.249 [2024-11-19 11:21:32.552053] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.249 [2024-11-19 11:21:32.552058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.249 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.249 [2024-11-19 11:21:32.561682] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.249 [2024-11-19 11:21:32.561694] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:24.249 [2024-11-19 11:21:32.561699] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.249 [2024-11-19 11:21:32.561704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.249 [2024-11-19 11:21:32.561718] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.250 [2024-11-19 11:21:32.562131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.250 [2024-11-19 11:21:32.562171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.250 [2024-11-19 11:21:32.562182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.250 [2024-11-19 11:21:32.562201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.250 [2024-11-19 11:21:32.562213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.250 [2024-11-19 11:21:32.562225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.250 [2024-11-19 11:21:32.562234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.250 [2024-11-19 11:21:32.562241] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.250 [2024-11-19 11:21:32.562246] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.250 [2024-11-19 11:21:32.562251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:24.250 [2024-11-19 11:21:32.571752] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.250 [2024-11-19 11:21:32.571768] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.250 [2024-11-19 11:21:32.571773] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.250 [2024-11-19 11:21:32.571778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.250 [2024-11-19 11:21:32.571795] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.250 [2024-11-19 11:21:32.572024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.250 [2024-11-19 11:21:32.572039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.250 [2024-11-19 11:21:32.572047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.250 [2024-11-19 11:21:32.572059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.250 [2024-11-19 11:21:32.572070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.250 [2024-11-19 11:21:32.572077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.250 [2024-11-19 11:21:32.572085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.250 [2024-11-19 11:21:32.572091] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:24.250 [2024-11-19 11:21:32.572096] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.250 [2024-11-19 11:21:32.572100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.250 [2024-11-19 11:21:32.581826] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.250 [2024-11-19 11:21:32.581840] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.250 [2024-11-19 11:21:32.581845] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.250 [2024-11-19 11:21:32.581849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.250 [2024-11-19 11:21:32.581869] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:24.250 [2024-11-19 11:21:32.582204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.250 [2024-11-19 11:21:32.582217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.250 [2024-11-19 11:21:32.582224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.250 [2024-11-19 11:21:32.582235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.250 [2024-11-19 11:21:32.582249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.250 [2024-11-19 11:21:32.582256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.250 [2024-11-19 11:21:32.582263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.250 [2024-11-19 11:21:32.582269] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.250 [2024-11-19 11:21:32.582274] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.250 [2024-11-19 11:21:32.582278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:24.250 [2024-11-19 11:21:32.591901] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.250 [2024-11-19 11:21:32.591914] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.250 [2024-11-19 11:21:32.591920] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.250 [2024-11-19 11:21:32.591924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.250 [2024-11-19 11:21:32.591938] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.250 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.250 [2024-11-19 11:21:32.593432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.250 [2024-11-19 11:21:32.593456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.250 [2024-11-19 11:21:32.593465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.250 [2024-11-19 11:21:32.593481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.250 [2024-11-19 11:21:32.593511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.250 [2024-11-19 11:21:32.593519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.250 [2024-11-19 11:21:32.593527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.250 [2024-11-19 11:21:32.593537] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:24.250 [2024-11-19 11:21:32.593542] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.250 [2024-11-19 11:21:32.593546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.513 [2024-11-19 11:21:32.601971] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.513 [2024-11-19 11:21:32.601986] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.513 [2024-11-19 11:21:32.601991] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.513 [2024-11-19 11:21:32.601996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.513 [2024-11-19 11:21:32.602012] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:24.513 [2024-11-19 11:21:32.602327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.513 [2024-11-19 11:21:32.602340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.513 [2024-11-19 11:21:32.602348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.513 [2024-11-19 11:21:32.602359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.513 [2024-11-19 11:21:32.602376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.513 [2024-11-19 11:21:32.602383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.513 [2024-11-19 11:21:32.602391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.513 [2024-11-19 11:21:32.602397] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.513 [2024-11-19 11:21:32.602402] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.513 [2024-11-19 11:21:32.602406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.513 [2024-11-19 11:21:32.612043] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.513 [2024-11-19 11:21:32.612056] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:24.513 [2024-11-19 11:21:32.612060] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.513 [2024-11-19 11:21:32.612065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.513 [2024-11-19 11:21:32.612079] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.513 [2024-11-19 11:21:32.612423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.513 [2024-11-19 11:21:32.612435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.513 [2024-11-19 11:21:32.612443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.513 [2024-11-19 11:21:32.612454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.513 [2024-11-19 11:21:32.612470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.513 [2024-11-19 11:21:32.612477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.513 [2024-11-19 11:21:32.612488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.513 [2024-11-19 11:21:32.612494] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.513 [2024-11-19 11:21:32.612499] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.513 [2024-11-19 11:21:32.612503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:24.513 [2024-11-19 11:21:32.622111] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.513 [2024-11-19 11:21:32.622124] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.513 [2024-11-19 11:21:32.622129] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.513 [2024-11-19 11:21:32.622133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.513 [2024-11-19 11:21:32.622149] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.513 [2024-11-19 11:21:32.622506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.513 [2024-11-19 11:21:32.622519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.513 [2024-11-19 11:21:32.622526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.513 [2024-11-19 11:21:32.622537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.513 [2024-11-19 11:21:32.622554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.513 [2024-11-19 11:21:32.622561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.513 [2024-11-19 11:21:32.622568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.513 [2024-11-19 11:21:32.622574] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:24.513 [2024-11-19 11:21:32.622579] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.514 [2024-11-19 11:21:32.622584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.514 [2024-11-19 11:21:32.632181] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.514 [2024-11-19 11:21:32.632192] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.514 [2024-11-19 11:21:32.632197] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.514 [2024-11-19 11:21:32.632201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.514 [2024-11-19 11:21:32.632216] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:24.514 [2024-11-19 11:21:32.632418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.514 [2024-11-19 11:21:32.632430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.514 [2024-11-19 11:21:32.632437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.514 [2024-11-19 11:21:32.632448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.514 [2024-11-19 11:21:32.632459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.514 [2024-11-19 11:21:32.632466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.514 [2024-11-19 11:21:32.632477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.514 [2024-11-19 11:21:32.632483] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.514 [2024-11-19 11:21:32.632487] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.514 [2024-11-19 11:21:32.632492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:24.514 [2024-11-19 11:21:32.642248] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.514 [2024-11-19 11:21:32.642261] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.514 [2024-11-19 11:21:32.642265] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.514 [2024-11-19 11:21:32.642270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.514 [2024-11-19 11:21:32.642285] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:24.514 [2024-11-19 11:21:32.642619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.514 [2024-11-19 11:21:32.642631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.514 [2024-11-19 11:21:32.642638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.514 [2024-11-19 11:21:32.642649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.514 [2024-11-19 11:21:32.642666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.514 [2024-11-19 11:21:32.642672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.514 [2024-11-19 11:21:32.642680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.514 [2024-11-19 11:21:32.642686] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.514 [2024-11-19 11:21:32.642691] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.514 [2024-11-19 11:21:32.642695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:24.514 [2024-11-19 11:21:32.652316] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.514 [2024-11-19 11:21:32.652331] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.514 [2024-11-19 11:21:32.652336] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.514 [2024-11-19 11:21:32.652340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.514 [2024-11-19 11:21:32.652355] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:24.514 [2024-11-19 11:21:32.652543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.514 [2024-11-19 11:21:32.652555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2236d90 with addr=10.0.0.2, port=4420 00:27:24.514 [2024-11-19 11:21:32.652562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2236d90 is same with the state(6) to be set 00:27:24.514 [2024-11-19 11:21:32.652573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236d90 (9): Bad file descriptor 00:27:24.514 [2024-11-19 11:21:32.652585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.514 [2024-11-19 11:21:32.652591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.514 [2024-11-19 11:21:32.652600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.514 [2024-11-19 11:21:32.652607] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.514 [2024-11-19 11:21:32.652614] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.514 [2024-11-19 11:21:32.652618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:24.514 [2024-11-19 11:21:32.655820] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:24.514 [2024-11-19 11:21:32.655839] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:27:24.514 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.456 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.457 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.457 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:25.457 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:25.457 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:25.457 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.457 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:25.457 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.457 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:25.719 11:21:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.719 
11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:25.719 11:21:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.719 11:21:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.108 [2024-11-19 11:21:35.021017] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:27.108 [2024-11-19 11:21:35.021036] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:27.108 [2024-11-19 11:21:35.021049] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:27.108 [2024-11-19 11:21:35.107313] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:27.108 [2024-11-19 11:21:35.172034] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:27:27.108 [2024-11-19 11:21:35.172766] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2265270:1 started. 00:27:27.108 [2024-11-19 11:21:35.174573] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:27.108 [2024-11-19 11:21:35.174601] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.108 request: 00:27:27.108 { 00:27:27.108 "name": "nvme", 00:27:27.108 "trtype": "tcp", 00:27:27.108 "traddr": "10.0.0.2", 00:27:27.108 "adrfam": "ipv4", 00:27:27.108 "trsvcid": "8009", 00:27:27.108 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:27.108 "wait_for_attach": true, 00:27:27.108 "method": "bdev_nvme_start_discovery", 00:27:27.108 "req_id": 1 00:27:27.108 } 00:27:27.108 Got JSON-RPC error response 00:27:27.108 response: 00:27:27.108 { 00:27:27.108 "code": -17, 00:27:27.108 "message": "File exists" 00:27:27.108 } 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.108 [2024-11-19 11:21:35.219662] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2265270 was disconnected and freed. delete nvme_qpair. 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:27.108 11:21:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.108 request: 00:27:27.108 { 00:27:27.108 "name": "nvme_second", 00:27:27.108 "trtype": "tcp", 00:27:27.108 "traddr": "10.0.0.2", 00:27:27.108 "adrfam": "ipv4", 00:27:27.108 "trsvcid": "8009", 00:27:27.108 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:27.108 "wait_for_attach": true, 00:27:27.108 "method": "bdev_nvme_start_discovery", 00:27:27.108 "req_id": 1 00:27:27.108 } 00:27:27.108 Got JSON-RPC error response 00:27:27.108 response: 00:27:27.108 { 00:27:27.108 "code": -17, 00:27:27.108 "message": "File exists" 00:27:27.108 } 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:27.108 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.109 11:21:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.496 [2024-11-19 11:21:36.430069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.496 [2024-11-19 11:21:36.430098] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226a8c0 with addr=10.0.0.2, port=8010 00:27:28.496 [2024-11-19 11:21:36.430111] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:28.496 [2024-11-19 11:21:36.430118] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:28.496 [2024-11-19 11:21:36.430126] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:29.439 [2024-11-19 11:21:37.432368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.439 [2024-11-19 11:21:37.432393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226a8c0 with addr=10.0.0.2, port=8010 00:27:29.439 [2024-11-19 11:21:37.432404] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:29.439 [2024-11-19 11:21:37.432410] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:29.439 [2024-11-19 11:21:37.432417] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:30.380 [2024-11-19 11:21:38.434393] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:30.380 request: 00:27:30.380 { 00:27:30.380 "name": "nvme_second", 00:27:30.380 "trtype": "tcp", 00:27:30.380 "traddr": "10.0.0.2", 00:27:30.380 "adrfam": "ipv4", 00:27:30.380 "trsvcid": "8010", 00:27:30.380 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:30.380 "wait_for_attach": false, 00:27:30.380 "attach_timeout_ms": 3000, 00:27:30.380 "method": "bdev_nvme_start_discovery", 00:27:30.380 "req_id": 1 00:27:30.380 } 00:27:30.380 Got JSON-RPC error response 00:27:30.381 response: 00:27:30.381 { 00:27:30.381 "code": -110, 00:27:30.381 "message": "Connection timed out" 00:27:30.381 } 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 73917 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:30.381 11:21:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:30.381 rmmod nvme_tcp 00:27:30.381 rmmod nvme_fabrics 00:27:30.381 rmmod nvme_keyring 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 73891 ']' 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 73891 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 73891 ']' 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 73891 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73891 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73891' 00:27:30.381 killing 
process with pid 73891 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 73891 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 73891 00:27:30.381 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.642 11:21:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.558 11:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:32.558 00:27:32.558 real 0m21.557s 00:27:32.558 user 0m24.915s 00:27:32.558 sys 0m7.865s 00:27:32.558 11:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:32.558 11:21:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:27:32.558 ************************************ 00:27:32.559 END TEST nvmf_host_discovery 00:27:32.559 ************************************ 00:27:32.559 11:21:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:32.559 11:21:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:32.559 11:21:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:32.559 11:21:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.559 ************************************ 00:27:32.559 START TEST nvmf_host_multipath_status 00:27:32.559 ************************************ 00:27:32.559 11:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:32.821 * Looking for test storage... 
00:27:32.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:32.821 11:21:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.821 11:21:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:32.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.821 --rc genhtml_branch_coverage=1 00:27:32.821 --rc genhtml_function_coverage=1 00:27:32.821 --rc genhtml_legend=1 00:27:32.821 --rc geninfo_all_blocks=1 00:27:32.821 --rc geninfo_unexecuted_blocks=1 00:27:32.821 00:27:32.821 ' 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:32.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.821 --rc genhtml_branch_coverage=1 00:27:32.821 --rc genhtml_function_coverage=1 00:27:32.821 --rc genhtml_legend=1 00:27:32.821 --rc geninfo_all_blocks=1 00:27:32.821 --rc geninfo_unexecuted_blocks=1 00:27:32.821 00:27:32.821 ' 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:32.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.821 --rc genhtml_branch_coverage=1 00:27:32.821 --rc genhtml_function_coverage=1 00:27:32.821 --rc genhtml_legend=1 00:27:32.821 --rc geninfo_all_blocks=1 00:27:32.821 --rc geninfo_unexecuted_blocks=1 00:27:32.821 00:27:32.821 ' 00:27:32.821 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:32.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.821 --rc genhtml_branch_coverage=1 00:27:32.822 --rc genhtml_function_coverage=1 00:27:32.822 --rc genhtml_legend=1 00:27:32.822 --rc geninfo_all_blocks=1 00:27:32.822 --rc geninfo_unexecuted_blocks=1 00:27:32.822 00:27:32.822 ' 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:32.822 
11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:32.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:32.822 11:21:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.822 11:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:41.118 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:41.118 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:41.118 Found net devices under 0000:31:00.0: cvl_0_0 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.118 11:21:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:41.118 Found net devices under 0000:31:00.1: cvl_0_1 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.118 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.119 11:21:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.119 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:27:41.381 00:27:41.381 --- 10.0.0.2 ping statistics --- 00:27:41.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.381 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:27:41.381 00:27:41.381 --- 10.0.0.1 ping statistics --- 00:27:41.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.381 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=80788 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 80788 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 80788 ']' 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.381 11:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:41.381 [2024-11-19 11:21:49.696004] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:27:41.381 [2024-11-19 11:21:49.696072] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.642 [2024-11-19 11:21:49.788346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:41.642 [2024-11-19 11:21:49.830000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.642 [2024-11-19 11:21:49.830035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:41.642 [2024-11-19 11:21:49.830043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.642 [2024-11-19 11:21:49.830050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.642 [2024-11-19 11:21:49.830056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.642 [2024-11-19 11:21:49.831341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.642 [2024-11-19 11:21:49.831343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.214 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.214 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:42.214 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:42.214 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.214 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:42.214 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.214 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=80788 00:27:42.214 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:42.474 [2024-11-19 11:21:50.687043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.474 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:27:42.735 Malloc0 00:27:42.735 11:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:42.735 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.996 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.258 [2024-11-19 11:21:51.355145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:43.258 [2024-11-19 11:21:51.511488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=81165 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 81165 /var/tmp/bdevperf.sock 00:27:43.258 11:21:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 81165 ']' 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:43.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.258 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:43.519 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.519 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:43.519 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:43.780 11:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:44.040 Nvme0n1 00:27:44.040 11:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:44.611 Nvme0n1 00:27:44.611 11:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:44.611 11:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:46.524 11:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:46.524 11:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:46.784 11:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:46.784 11:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.167 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:48.427 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.427 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:48.427 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.427 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:48.686 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.686 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:48.686 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.686 11:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:48.947 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.947 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:48.947 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.947 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:48.947 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.947 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:48.947 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:49.207 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:49.468 11:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:50.410 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:50.410 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:50.410 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.410 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:50.410 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.410 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:50.410 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.410 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:50.671 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.671 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:50.671 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.671 11:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:50.932 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.932 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:50.932 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.932 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:51.193 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.193 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:51.193 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.193 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:51.193 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.193 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:51.193 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.193 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:51.454 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.454 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:51.454 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:51.715 11:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:51.715 11:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:53.102 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.363 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.363 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:53.363 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.363 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:53.623 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.623 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:53.623 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.623 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:53.884 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.884 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:53.884 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.884 11:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:53.884 11:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.884 11:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:53.884 11:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:54.144 11:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:54.405 11:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:55.347 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:55.347 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:55.347 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.347 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:55.347 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.347 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:55.608 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.608 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:55.608 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:55.609 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:55.609 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.609 11:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:55.870 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.870 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:55.870 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.870 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:56.131 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.131 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:56.131 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.131 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:56.131 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.131 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:56.131 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.131 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:56.391 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:56.391 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:56.391 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:56.651 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:56.651 11:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:58.038 11:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:58.038 11:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:58.038 11:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.038 11:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:58.038 11:22:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.038 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:58.038 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.038 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:58.038 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.038 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:58.038 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.038 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:58.299 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.299 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:58.299 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:58.299 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.561 
11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.561 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:58.561 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.561 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:58.561 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.561 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:58.561 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.561 11:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:58.822 11:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.822 11:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:58.822 11:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:59.084 11:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:59.084 11:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.469 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:00.731 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.731 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:00.731 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.731 11:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:00.992 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.992 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:00.992 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.992 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:01.254 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:01.254 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:01.254 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.254 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:01.254 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.254 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:01.515 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:01.515 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:01.515 11:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:01.776 11:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:02.718 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:02.718 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:02.718 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:02.718 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:02.980 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.980 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:02.980 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.980 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:03.240 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.240 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:03.240 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.240 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:03.502 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.502 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:03.502 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:03.502 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:03.502 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.502 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:03.502 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.502 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:03.764 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.764 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:03.764 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:03.764 11:22:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.025 11:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.025 11:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:04.025 11:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:04.025 11:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:04.286 11:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:05.229 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:05.229 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:05.229 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.229 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:05.490 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:05.490 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:05.490 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.490 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:05.751 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.751 11:22:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:05.751 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.751 11:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:05.751 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.751 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:06.012 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.012 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:06.012 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.012 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:06.012 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.012 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:06.274 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.274 
11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:06.274 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.274 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:06.535 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.535 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:06.535 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:06.535 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:06.796 11:22:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:07.739 11:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:07.739 11:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:07.739 11:22:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.739 11:22:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:08.000 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.000 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:08.000 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.001 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:08.001 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.001 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:08.261 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:08.261 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.261 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.262 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:08.262 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:08.262 11:22:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.522 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.522 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:08.522 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.522 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:08.522 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.522 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:08.522 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.522 11:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:08.784 11:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.784 11:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:08.784 11:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:09.044 11:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:09.305 11:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:10.246 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:10.246 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:10.246 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.246 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:10.506 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.506 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:10.506 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.506 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:10.506 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:10.506 
11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:10.506 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.506 11:22:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:10.767 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.767 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:10.767 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.767 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:11.026 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.026 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:11.026 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.026 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 81165 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 81165 ']' 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 81165 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.287 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81165 00:28:11.552 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:11.552 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:11.552 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81165' 00:28:11.552 killing process with pid 81165 00:28:11.552 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 81165 00:28:11.552 
11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 81165 00:28:11.552 { 00:28:11.552 "results": [ 00:28:11.552 { 00:28:11.552 "job": "Nvme0n1", 00:28:11.552 "core_mask": "0x4", 00:28:11.552 "workload": "verify", 00:28:11.552 "status": "terminated", 00:28:11.552 "verify_range": { 00:28:11.552 "start": 0, 00:28:11.552 "length": 16384 00:28:11.552 }, 00:28:11.552 "queue_depth": 128, 00:28:11.552 "io_size": 4096, 00:28:11.552 "runtime": 26.767395, 00:28:11.552 "iops": 10796.194399940674, 00:28:11.552 "mibps": 42.17263437476826, 00:28:11.552 "io_failed": 0, 00:28:11.552 "io_timeout": 0, 00:28:11.552 "avg_latency_us": 11838.060486067374, 00:28:11.552 "min_latency_us": 295.25333333333333, 00:28:11.552 "max_latency_us": 3019898.88 00:28:11.552 } 00:28:11.552 ], 00:28:11.552 "core_count": 1 00:28:11.552 } 00:28:11.552 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 81165 00:28:11.552 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:11.552 [2024-11-19 11:21:51.560310] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:28:11.552 [2024-11-19 11:21:51.560368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81165 ] 00:28:11.552 [2024-11-19 11:21:51.625386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.552 [2024-11-19 11:21:51.654171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.552 Running I/O for 90 seconds... 
00:28:11.552 9512.00 IOPS, 37.16 MiB/s [2024-11-19T10:22:19.904Z] 9602.00 IOPS, 37.51 MiB/s [2024-11-19T10:22:19.904Z] 9583.00 IOPS, 37.43 MiB/s [2024-11-19T10:22:19.904Z] 9601.75 IOPS, 37.51 MiB/s [2024-11-19T10:22:19.904Z] 9858.20 IOPS, 38.51 MiB/s [2024-11-19T10:22:19.904Z] 10362.17 IOPS, 40.48 MiB/s [2024-11-19T10:22:19.904Z] 10737.71 IOPS, 41.94 MiB/s [2024-11-19T10:22:19.904Z] 10709.12 IOPS, 41.83 MiB/s [2024-11-19T10:22:19.904Z] 10596.67 IOPS, 41.39 MiB/s [2024-11-19T10:22:19.904Z] 10502.50 IOPS, 41.03 MiB/s [2024-11-19T10:22:19.904Z] 10420.00 IOPS, 40.70 MiB/s [2024-11-19T10:22:19.904Z] [2024-11-19 11:22:04.744591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.552 [2024-11-19 11:22:04.744625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:11.552 [2024-11-19 11:22:04.744659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.552 [2024-11-19 11:22:04.744666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:11.552 [2024-11-19 11:22:04.744677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 
cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-11-19 11:22:04.744775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:11.553 [2024-11-19 11:22:04.744796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-11-19 11:22:04.744811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-11-19 11:22:04.744827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-11-19 11:22:04.744843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-11-19 11:22:04.744858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-11-19 11:22:04.744878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 
m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:11.553 [2024-11-19 11:22:04.744971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.744986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.744999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.745256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.745277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.745293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:11.553 
[2024-11-19 11:22:04.745310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.745328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.745344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.745361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.745378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.745384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.746264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 
11:22:04.746272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.746285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.553 [2024-11-19 11:22:04.746291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:11.553 [2024-11-19 11:22:04.746303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746373] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:11.554 [2024-11-19 11:22:04.746919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.554 [2024-11-19 11:22:04.746924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.746936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.746942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.746954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.746959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.746971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.746976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.746989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.746994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-11-19 11:22:04.747354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.555 [2024-11-19 11:22:04.747560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:11.555 [2024-11-19 11:22:04.747576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.747984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.747999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.556 [2024-11-19 11:22:04.748004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.748020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.556 [2024-11-19 11:22:04.748026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.748040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.556 [2024-11-19 11:22:04.748046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.748060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.556 [2024-11-19 11:22:04.748066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.748081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.556 [2024-11-19 11:22:04.748086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.748101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.556 [2024-11-19 11:22:04.748105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.748120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.556 [2024-11-19 11:22:04.748126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.748140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.556 [2024-11-19 11:22:04.748146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:04.748160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:04.748165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:11.556 10272.58 IOPS, 40.13 MiB/s [2024-11-19T10:22:19.908Z] 9482.38 IOPS, 37.04 MiB/s [2024-11-19T10:22:19.908Z] 8805.07 IOPS, 34.39 MiB/s [2024-11-19T10:22:19.908Z] 8307.87 IOPS, 32.45 MiB/s [2024-11-19T10:22:19.908Z] 8598.25 IOPS, 33.59 MiB/s [2024-11-19T10:22:19.908Z] 8859.76 IOPS, 34.61 MiB/s [2024-11-19T10:22:19.908Z] 9296.72 IOPS, 36.32 MiB/s [2024-11-19T10:22:19.908Z] 9696.37 IOPS, 37.88 MiB/s [2024-11-19T10:22:19.908Z] 9961.55 IOPS, 38.91 MiB/s [2024-11-19T10:22:19.908Z] 10098.67 IOPS, 39.45 MiB/s [2024-11-19T10:22:19.908Z] 10244.00 IOPS, 40.02 MiB/s [2024-11-19T10:22:19.908Z] 10506.87 IOPS, 41.04 MiB/s [2024-11-19T10:22:19.908Z] 10772.50 IOPS, 42.08 MiB/s [2024-11-19T10:22:19.908Z] [2024-11-19 11:22:17.403350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:17.403386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:17.403417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:17.403424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:17.403435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.556 [2024-11-19 11:22:17.403445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:11.556 [2024-11-19 11:22:17.403456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.557 [2024-11-19 11:22:17.403461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.557 [2024-11-19 11:22:17.403476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.557 [2024-11-19 11:22:17.403492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:50 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.557 [2024-11-19 11:22:17.403507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.557 [2024-11-19 11:22:17.403523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.557 [2024-11-19 11:22:17.403622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.557 [2024-11-19 11:22:17.403639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.557 [2024-11-19 11:22:17.403655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.557 [2024-11-19 11:22:17.403670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.403680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.557 [2024-11-19 11:22:17.403686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.404258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.557 [2024-11-19 11:22:17.404270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.404282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.557 [2024-11-19 11:22:17.404287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:11.557 [2024-11-19 11:22:17.404301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.557 [2024-11-19 11:22:17.404306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:11.557 10881.80 IOPS, 42.51 MiB/s [2024-11-19T10:22:19.909Z] 10833.19 IOPS, 42.32 MiB/s [2024-11-19T10:22:19.909Z] Received shutdown signal, test time was about 26.768005 seconds 00:28:11.557 00:28:11.557 Latency(us) 00:28:11.557 [2024-11-19T10:22:19.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.557 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:11.557 Verification LBA range: start 0x0 length 0x4000 
00:28:11.557 Nvme0n1 : 26.77 10796.19 42.17 0.00 0.00 11838.06 295.25 3019898.88 00:28:11.557 [2024-11-19T10:22:19.909Z] =================================================================================================================== 00:28:11.557 [2024-11-19T10:22:19.909Z] Total : 10796.19 42.17 0.00 0.00 11838.06 295.25 3019898.88 00:28:11.557 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.819 rmmod nvme_tcp 00:28:11.819 rmmod nvme_fabrics 00:28:11.819 rmmod nvme_keyring 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:11.819 11:22:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 80788 ']' 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 80788 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 80788 ']' 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 80788 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.819 11:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80788 00:28:11.819 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:11.819 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:11.819 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80788' 00:28:11.819 killing process with pid 80788 00:28:11.819 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 80788 00:28:11.819 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 80788 00:28:12.079 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.079 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:12.079 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:12.080 11:22:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:12.080 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:28:12.080 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:12.080 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:28:12.080 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.080 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.080 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.080 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.080 11:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.992 11:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.992 00:28:13.992 real 0m41.375s 00:28:13.992 user 1m44.092s 00:28:13.992 sys 0m12.346s 00:28:13.992 11:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.992 11:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:13.992 ************************************ 00:28:13.992 END TEST nvmf_host_multipath_status 00:28:13.992 ************************************ 00:28:13.992 11:22:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:13.992 11:22:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:13.992 11:22:22 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.992 11:22:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.254 ************************************ 00:28:14.254 START TEST nvmf_discovery_remove_ifc 00:28:14.254 ************************************ 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:14.254 * Looking for test storage... 00:28:14.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@338 -- # local 'op=<' 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:14.254 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:14.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.255 --rc genhtml_branch_coverage=1 00:28:14.255 --rc genhtml_function_coverage=1 00:28:14.255 --rc genhtml_legend=1 00:28:14.255 --rc geninfo_all_blocks=1 00:28:14.255 --rc geninfo_unexecuted_blocks=1 00:28:14.255 00:28:14.255 ' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:14.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.255 --rc genhtml_branch_coverage=1 00:28:14.255 --rc genhtml_function_coverage=1 00:28:14.255 --rc genhtml_legend=1 00:28:14.255 --rc geninfo_all_blocks=1 00:28:14.255 --rc geninfo_unexecuted_blocks=1 00:28:14.255 00:28:14.255 ' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:14.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.255 --rc genhtml_branch_coverage=1 00:28:14.255 --rc genhtml_function_coverage=1 00:28:14.255 --rc genhtml_legend=1 00:28:14.255 --rc geninfo_all_blocks=1 00:28:14.255 --rc geninfo_unexecuted_blocks=1 00:28:14.255 00:28:14.255 ' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:14.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.255 --rc genhtml_branch_coverage=1 00:28:14.255 --rc 
genhtml_function_coverage=1 00:28:14.255 --rc genhtml_legend=1 00:28:14.255 --rc geninfo_all_blocks=1 00:28:14.255 --rc geninfo_unexecuted_blocks=1 00:28:14.255 00:28:14.255 ' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:14.255 11:22:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:14.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:14.255 
11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:28:14.255 11:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.396 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:22.397 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:22.397 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:22.397 Found net devices under 0000:31:00.0: cvl_0_0 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.397 11:22:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:22.397 Found net devices under 0000:31:00.1: cvl_0_1 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.397 11:22:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.397 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.659 11:22:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:28:22.659 00:28:22.659 --- 10.0.0.2 ping statistics --- 00:28:22.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.659 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:28:22.659 00:28:22.659 --- 10.0.0.1 ping statistics --- 00:28:22.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.659 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91609 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 91609 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91609 ']' 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.659 11:22:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.921 [2024-11-19 11:22:31.052362] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:28:22.921 [2024-11-19 11:22:31.052428] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.921 [2024-11-19 11:22:31.159509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.921 [2024-11-19 11:22:31.210531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.921 [2024-11-19 11:22:31.210587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:22.922 [2024-11-19 11:22:31.210595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.922 [2024-11-19 11:22:31.210602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.922 [2024-11-19 11:22:31.210609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.922 [2024-11-19 11:22:31.211429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.895 11:22:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.895 [2024-11-19 11:22:31.956325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.895 [2024-11-19 11:22:31.964483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:23.895 null0 00:28:23.895 [2024-11-19 11:22:31.996485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91738 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91738 /tmp/host.sock 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91738 ']' 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:23.895 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.895 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:23.896 [2024-11-19 11:22:32.052921] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:28:23.896 [2024-11-19 11:22:32.052971] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91738 ] 00:28:23.896 [2024-11-19 11:22:32.129713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.896 [2024-11-19 11:22:32.165853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.522 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.522 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:24.522 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:24.522 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:24.522 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.522 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.523 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.523 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:24.523 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.523 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:24.784 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.784 11:22:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:24.784 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.784 11:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:25.725 [2024-11-19 11:22:33.985039] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:25.725 [2024-11-19 11:22:33.985059] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:25.725 [2024-11-19 11:22:33.985073] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:25.725 [2024-11-19 11:22:34.073361] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:25.985 [2024-11-19 11:22:34.134086] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:25.985 [2024-11-19 11:22:34.135050] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23ce670:1 started. 
00:28:25.985 [2024-11-19 11:22:34.136599] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:25.985 [2024-11-19 11:22:34.136639] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:25.985 [2024-11-19 11:22:34.136660] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:25.985 [2024-11-19 11:22:34.136674] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:25.985 [2024-11-19 11:22:34.136695] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:25.985 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.985 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:25.985 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:25.986 [2024-11-19 11:22:34.144073] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23ce670 was disconnected and freed. delete nvme_qpair. 
00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:25.986 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:26.245 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.245 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:26.245 11:22:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:27.186 11:22:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:28.128 11:22:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:29.511 11:22:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.452 11:22:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:30.452 11:22:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:31.403 [2024-11-19 11:22:39.577431] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:31.403 [2024-11-19 11:22:39.577471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.403 [2024-11-19 11:22:39.577484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.403 [2024-11-19 11:22:39.577493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.403 [2024-11-19 11:22:39.577501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.403 [2024-11-19 11:22:39.577509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.403 [2024-11-19 11:22:39.577516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.403 [2024-11-19 11:22:39.577524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.403 [2024-11-19 11:22:39.577532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.403 [2024-11-19 11:22:39.577541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:31.403 [2024-11-19 11:22:39.577548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.403 [2024-11-19 11:22:39.577556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab050 is same with the state(6) to be set 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:31.403 [2024-11-19 11:22:39.587455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ab050 (9): Bad file descriptor 00:28:31.403 [2024-11-19 11:22:39.597489] 
bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:31.403 [2024-11-19 11:22:39.597503] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:31.403 [2024-11-19 11:22:39.597508] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:31.403 [2024-11-19 11:22:39.597513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:31.403 [2024-11-19 11:22:39.597532] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:31.403 11:22:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:32.347 [2024-11-19 11:22:40.603893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:32.347 [2024-11-19 11:22:40.603950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ab050 with addr=10.0.0.2, port=4420 00:28:32.347 [2024-11-19 11:22:40.603964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ab050 is same with the state(6) to be set 00:28:32.347 [2024-11-19 11:22:40.603995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ab050 (9): Bad file descriptor 00:28:32.347 [2024-11-19 11:22:40.604049] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:28:32.347 [2024-11-19 11:22:40.604072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:32.347 [2024-11-19 11:22:40.604080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:32.347 [2024-11-19 11:22:40.604089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:32.347 [2024-11-19 11:22:40.604098] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:32.347 [2024-11-19 11:22:40.604105] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:32.347 [2024-11-19 11:22:40.604110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:32.347 [2024-11-19 11:22:40.604118] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:28:32.347 [2024-11-19 11:22:40.604124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:32.347 11:22:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:33.290 [2024-11-19 11:22:41.606497] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:33.290 [2024-11-19 11:22:41.606519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:33.290 [2024-11-19 11:22:41.606531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:33.290 [2024-11-19 11:22:41.606539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:33.290 [2024-11-19 11:22:41.606548] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:33.290 [2024-11-19 11:22:41.606556] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:33.290 [2024-11-19 11:22:41.606561] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:33.290 [2024-11-19 11:22:41.606566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:33.290 [2024-11-19 11:22:41.606594] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:33.290 [2024-11-19 11:22:41.606617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.290 [2024-11-19 11:22:41.606628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.290 [2024-11-19 11:22:41.606638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.290 [2024-11-19 11:22:41.606646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.290 [2024-11-19 11:22:41.606654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:33.290 [2024-11-19 11:22:41.606661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.290 [2024-11-19 11:22:41.606669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.290 [2024-11-19 11:22:41.606676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.290 [2024-11-19 11:22:41.606684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.290 [2024-11-19 11:22:41.606692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.290 [2024-11-19 11:22:41.606699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:28:33.290 [2024-11-19 11:22:41.606725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239a380 (9): Bad file descriptor 00:28:33.290 [2024-11-19 11:22:41.607723] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:33.290 [2024-11-19 11:22:41.607735] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:33.551 11:22:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:34.937 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:34.937 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:34.937 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.937 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:34.937 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.937 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.938 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:34.938 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.938 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:34.938 11:22:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:35.508 [2024-11-19 11:22:43.660017] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:35.508 [2024-11-19 11:22:43.660033] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:35.508 [2024-11-19 11:22:43.660046] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:35.508 [2024-11-19 11:22:43.746304] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:35.769 11:22:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.769 11:22:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.769 11:22:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.769 11:22:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.769 11:22:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.769 11:22:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.769 11:22:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.769 [2024-11-19 11:22:43.967560] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:35.769 [2024-11-19 11:22:43.968451] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x23b5940:1 started. 
00:28:35.769 [2024-11-19 11:22:43.969673] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:35.769 [2024-11-19 11:22:43.969707] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:35.769 [2024-11-19 11:22:43.969726] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:35.769 [2024-11-19 11:22:43.969740] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:35.769 [2024-11-19 11:22:43.969749] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:35.769 11:22:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.769 [2024-11-19 11:22:43.978283] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x23b5940 was disconnected and freed. delete nvme_qpair. 00:28:35.769 11:22:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:35.769 11:22:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.713 11:22:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91738 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91738 ']' 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91738 00:28:36.713 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91738 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91738' 00:28:36.974 killing process with pid 91738 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91738 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91738 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.974 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.975 rmmod nvme_tcp 00:28:36.975 rmmod nvme_fabrics 00:28:36.975 rmmod nvme_keyring 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91609 ']' 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91609 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91609 ']' 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91609 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.975 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91609 00:28:37.236 11:22:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91609' 00:28:37.236 killing process with pid 91609 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91609 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91609 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.236 11:22:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.236 11:22:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.783 00:28:39.783 real 0m25.209s 00:28:39.783 user 0m29.568s 00:28:39.783 sys 0m7.793s 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:39.783 ************************************ 00:28:39.783 END TEST nvmf_discovery_remove_ifc 00:28:39.783 ************************************ 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.783 ************************************ 00:28:39.783 START TEST nvmf_identify_kernel_target 00:28:39.783 ************************************ 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:39.783 * Looking for test storage... 
00:28:39.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:39.783 11:22:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:39.783 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.784 11:22:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:39.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.784 --rc genhtml_branch_coverage=1 00:28:39.784 --rc genhtml_function_coverage=1 00:28:39.784 --rc genhtml_legend=1 00:28:39.784 --rc geninfo_all_blocks=1 00:28:39.784 --rc geninfo_unexecuted_blocks=1 00:28:39.784 00:28:39.784 ' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:39.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.784 --rc genhtml_branch_coverage=1 00:28:39.784 --rc genhtml_function_coverage=1 00:28:39.784 --rc genhtml_legend=1 00:28:39.784 --rc geninfo_all_blocks=1 00:28:39.784 --rc geninfo_unexecuted_blocks=1 00:28:39.784 00:28:39.784 ' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:39.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.784 --rc genhtml_branch_coverage=1 00:28:39.784 --rc genhtml_function_coverage=1 00:28:39.784 --rc genhtml_legend=1 00:28:39.784 --rc geninfo_all_blocks=1 00:28:39.784 --rc geninfo_unexecuted_blocks=1 00:28:39.784 00:28:39.784 ' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:39.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.784 --rc genhtml_branch_coverage=1 00:28:39.784 --rc genhtml_function_coverage=1 00:28:39.784 --rc genhtml_legend=1 00:28:39.784 --rc geninfo_all_blocks=1 00:28:39.784 --rc geninfo_unexecuted_blocks=1 00:28:39.784 00:28:39.784 ' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:39.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.784 11:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.931 11:22:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:47.931 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.931 11:22:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:47.931 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.931 11:22:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:47.931 Found net devices under 0000:31:00.0: cvl_0_0 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:47.931 Found net devices under 0000:31:00.1: cvl_0_1 
00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.931 11:22:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.931 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:47.931 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.931 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.931 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.931 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:47.931 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:47.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:47.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:28:47.931 00:28:47.931 --- 10.0.0.2 ping statistics --- 00:28:47.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.932 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:47.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:28:47.932 00:28:47.932 --- 10.0.0.1 ping statistics --- 00:28:47.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.932 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:47.932 
11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:47.932 11:22:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:52.163 Waiting for block devices as requested 00:28:52.163 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:52.163 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:52.163 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:52.163 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:52.163 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:52.163 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:52.163 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:52.163 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:52.163 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:52.424 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:52.424 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:52.424 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:52.424 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:52.684 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:52.684 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:28:52.684 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:52.684 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:53.256 No valid GPT data, bailing 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:53.256 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:53.256 00:28:53.256 Discovery Log Number of Records 2, Generation counter 2 00:28:53.256 =====Discovery Log Entry 0====== 00:28:53.256 trtype: tcp 00:28:53.256 adrfam: ipv4 00:28:53.256 subtype: current discovery subsystem 
00:28:53.256 treq: not specified, sq flow control disable supported 00:28:53.257 portid: 1 00:28:53.257 trsvcid: 4420 00:28:53.257 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:53.257 traddr: 10.0.0.1 00:28:53.257 eflags: none 00:28:53.257 sectype: none 00:28:53.257 =====Discovery Log Entry 1====== 00:28:53.257 trtype: tcp 00:28:53.257 adrfam: ipv4 00:28:53.257 subtype: nvme subsystem 00:28:53.257 treq: not specified, sq flow control disable supported 00:28:53.257 portid: 1 00:28:53.257 trsvcid: 4420 00:28:53.257 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:53.257 traddr: 10.0.0.1 00:28:53.257 eflags: none 00:28:53.257 sectype: none 00:28:53.257 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:53.257 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:53.519 ===================================================== 00:28:53.519 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:53.519 ===================================================== 00:28:53.519 Controller Capabilities/Features 00:28:53.519 ================================ 00:28:53.519 Vendor ID: 0000 00:28:53.519 Subsystem Vendor ID: 0000 00:28:53.519 Serial Number: 6937f634ceb2001bf235 00:28:53.519 Model Number: Linux 00:28:53.519 Firmware Version: 6.8.9-20 00:28:53.519 Recommended Arb Burst: 0 00:28:53.519 IEEE OUI Identifier: 00 00 00 00:28:53.519 Multi-path I/O 00:28:53.519 May have multiple subsystem ports: No 00:28:53.519 May have multiple controllers: No 00:28:53.519 Associated with SR-IOV VF: No 00:28:53.519 Max Data Transfer Size: Unlimited 00:28:53.519 Max Number of Namespaces: 0 00:28:53.519 Max Number of I/O Queues: 1024 00:28:53.519 NVMe Specification Version (VS): 1.3 00:28:53.519 NVMe Specification Version (Identify): 1.3 00:28:53.519 Maximum Queue Entries: 1024 
00:28:53.519 Contiguous Queues Required: No 00:28:53.519 Arbitration Mechanisms Supported 00:28:53.519 Weighted Round Robin: Not Supported 00:28:53.519 Vendor Specific: Not Supported 00:28:53.519 Reset Timeout: 7500 ms 00:28:53.519 Doorbell Stride: 4 bytes 00:28:53.519 NVM Subsystem Reset: Not Supported 00:28:53.519 Command Sets Supported 00:28:53.519 NVM Command Set: Supported 00:28:53.519 Boot Partition: Not Supported 00:28:53.519 Memory Page Size Minimum: 4096 bytes 00:28:53.519 Memory Page Size Maximum: 4096 bytes 00:28:53.519 Persistent Memory Region: Not Supported 00:28:53.519 Optional Asynchronous Events Supported 00:28:53.519 Namespace Attribute Notices: Not Supported 00:28:53.519 Firmware Activation Notices: Not Supported 00:28:53.519 ANA Change Notices: Not Supported 00:28:53.519 PLE Aggregate Log Change Notices: Not Supported 00:28:53.519 LBA Status Info Alert Notices: Not Supported 00:28:53.519 EGE Aggregate Log Change Notices: Not Supported 00:28:53.519 Normal NVM Subsystem Shutdown event: Not Supported 00:28:53.519 Zone Descriptor Change Notices: Not Supported 00:28:53.519 Discovery Log Change Notices: Supported 00:28:53.519 Controller Attributes 00:28:53.519 128-bit Host Identifier: Not Supported 00:28:53.519 Non-Operational Permissive Mode: Not Supported 00:28:53.519 NVM Sets: Not Supported 00:28:53.519 Read Recovery Levels: Not Supported 00:28:53.519 Endurance Groups: Not Supported 00:28:53.519 Predictable Latency Mode: Not Supported 00:28:53.519 Traffic Based Keep ALive: Not Supported 00:28:53.519 Namespace Granularity: Not Supported 00:28:53.519 SQ Associations: Not Supported 00:28:53.519 UUID List: Not Supported 00:28:53.519 Multi-Domain Subsystem: Not Supported 00:28:53.519 Fixed Capacity Management: Not Supported 00:28:53.519 Variable Capacity Management: Not Supported 00:28:53.519 Delete Endurance Group: Not Supported 00:28:53.519 Delete NVM Set: Not Supported 00:28:53.519 Extended LBA Formats Supported: Not Supported 00:28:53.519 Flexible 
Data Placement Supported: Not Supported 00:28:53.519 00:28:53.519 Controller Memory Buffer Support 00:28:53.519 ================================ 00:28:53.519 Supported: No 00:28:53.519 00:28:53.519 Persistent Memory Region Support 00:28:53.519 ================================ 00:28:53.519 Supported: No 00:28:53.519 00:28:53.519 Admin Command Set Attributes 00:28:53.519 ============================ 00:28:53.519 Security Send/Receive: Not Supported 00:28:53.519 Format NVM: Not Supported 00:28:53.519 Firmware Activate/Download: Not Supported 00:28:53.519 Namespace Management: Not Supported 00:28:53.519 Device Self-Test: Not Supported 00:28:53.519 Directives: Not Supported 00:28:53.519 NVMe-MI: Not Supported 00:28:53.519 Virtualization Management: Not Supported 00:28:53.519 Doorbell Buffer Config: Not Supported 00:28:53.519 Get LBA Status Capability: Not Supported 00:28:53.519 Command & Feature Lockdown Capability: Not Supported 00:28:53.519 Abort Command Limit: 1 00:28:53.519 Async Event Request Limit: 1 00:28:53.519 Number of Firmware Slots: N/A 00:28:53.519 Firmware Slot 1 Read-Only: N/A 00:28:53.519 Firmware Activation Without Reset: N/A 00:28:53.519 Multiple Update Detection Support: N/A 00:28:53.519 Firmware Update Granularity: No Information Provided 00:28:53.519 Per-Namespace SMART Log: No 00:28:53.519 Asymmetric Namespace Access Log Page: Not Supported 00:28:53.519 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:53.519 Command Effects Log Page: Not Supported 00:28:53.519 Get Log Page Extended Data: Supported 00:28:53.519 Telemetry Log Pages: Not Supported 00:28:53.519 Persistent Event Log Pages: Not Supported 00:28:53.519 Supported Log Pages Log Page: May Support 00:28:53.519 Commands Supported & Effects Log Page: Not Supported 00:28:53.519 Feature Identifiers & Effects Log Page: May Support 00:28:53.519 NVMe-MI Commands & Effects Log Page: May Support 00:28:53.519 Data Area 4 for Telemetry Log: Not Supported 00:28:53.519 Error Log Page Entries 
Supported: 1 00:28:53.519 Keep Alive: Not Supported 00:28:53.519 00:28:53.519 NVM Command Set Attributes 00:28:53.519 ========================== 00:28:53.519 Submission Queue Entry Size 00:28:53.519 Max: 1 00:28:53.519 Min: 1 00:28:53.519 Completion Queue Entry Size 00:28:53.519 Max: 1 00:28:53.519 Min: 1 00:28:53.519 Number of Namespaces: 0 00:28:53.519 Compare Command: Not Supported 00:28:53.519 Write Uncorrectable Command: Not Supported 00:28:53.519 Dataset Management Command: Not Supported 00:28:53.519 Write Zeroes Command: Not Supported 00:28:53.519 Set Features Save Field: Not Supported 00:28:53.519 Reservations: Not Supported 00:28:53.519 Timestamp: Not Supported 00:28:53.519 Copy: Not Supported 00:28:53.519 Volatile Write Cache: Not Present 00:28:53.519 Atomic Write Unit (Normal): 1 00:28:53.519 Atomic Write Unit (PFail): 1 00:28:53.519 Atomic Compare & Write Unit: 1 00:28:53.519 Fused Compare & Write: Not Supported 00:28:53.519 Scatter-Gather List 00:28:53.519 SGL Command Set: Supported 00:28:53.519 SGL Keyed: Not Supported 00:28:53.519 SGL Bit Bucket Descriptor: Not Supported 00:28:53.519 SGL Metadata Pointer: Not Supported 00:28:53.519 Oversized SGL: Not Supported 00:28:53.519 SGL Metadata Address: Not Supported 00:28:53.520 SGL Offset: Supported 00:28:53.520 Transport SGL Data Block: Not Supported 00:28:53.520 Replay Protected Memory Block: Not Supported 00:28:53.520 00:28:53.520 Firmware Slot Information 00:28:53.520 ========================= 00:28:53.520 Active slot: 0 00:28:53.520 00:28:53.520 00:28:53.520 Error Log 00:28:53.520 ========= 00:28:53.520 00:28:53.520 Active Namespaces 00:28:53.520 ================= 00:28:53.520 Discovery Log Page 00:28:53.520 ================== 00:28:53.520 Generation Counter: 2 00:28:53.520 Number of Records: 2 00:28:53.520 Record Format: 0 00:28:53.520 00:28:53.520 Discovery Log Entry 0 00:28:53.520 ---------------------- 00:28:53.520 Transport Type: 3 (TCP) 00:28:53.520 Address Family: 1 (IPv4) 00:28:53.520 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:28:53.520 Entry Flags: 00:28:53.520 Duplicate Returned Information: 0 00:28:53.520 Explicit Persistent Connection Support for Discovery: 0 00:28:53.520 Transport Requirements: 00:28:53.520 Secure Channel: Not Specified 00:28:53.520 Port ID: 1 (0x0001) 00:28:53.520 Controller ID: 65535 (0xffff) 00:28:53.520 Admin Max SQ Size: 32 00:28:53.520 Transport Service Identifier: 4420 00:28:53.520 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:53.520 Transport Address: 10.0.0.1 00:28:53.520 Discovery Log Entry 1 00:28:53.520 ---------------------- 00:28:53.520 Transport Type: 3 (TCP) 00:28:53.520 Address Family: 1 (IPv4) 00:28:53.520 Subsystem Type: 2 (NVM Subsystem) 00:28:53.520 Entry Flags: 00:28:53.520 Duplicate Returned Information: 0 00:28:53.520 Explicit Persistent Connection Support for Discovery: 0 00:28:53.520 Transport Requirements: 00:28:53.520 Secure Channel: Not Specified 00:28:53.520 Port ID: 1 (0x0001) 00:28:53.520 Controller ID: 65535 (0xffff) 00:28:53.520 Admin Max SQ Size: 32 00:28:53.520 Transport Service Identifier: 4420 00:28:53.520 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:53.520 Transport Address: 10.0.0.1 00:28:53.520 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:53.520 get_feature(0x01) failed 00:28:53.520 get_feature(0x02) failed 00:28:53.520 get_feature(0x04) failed 00:28:53.520 ===================================================== 00:28:53.520 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:53.520 ===================================================== 00:28:53.520 Controller Capabilities/Features 00:28:53.520 ================================ 00:28:53.520 Vendor ID: 0000 00:28:53.520 Subsystem Vendor ID: 
0000 00:28:53.520 Serial Number: 4987a09b86fdeebed136 00:28:53.520 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:53.520 Firmware Version: 6.8.9-20 00:28:53.520 Recommended Arb Burst: 6 00:28:53.520 IEEE OUI Identifier: 00 00 00 00:28:53.520 Multi-path I/O 00:28:53.520 May have multiple subsystem ports: Yes 00:28:53.520 May have multiple controllers: Yes 00:28:53.520 Associated with SR-IOV VF: No 00:28:53.520 Max Data Transfer Size: Unlimited 00:28:53.520 Max Number of Namespaces: 1024 00:28:53.520 Max Number of I/O Queues: 128 00:28:53.520 NVMe Specification Version (VS): 1.3 00:28:53.520 NVMe Specification Version (Identify): 1.3 00:28:53.520 Maximum Queue Entries: 1024 00:28:53.520 Contiguous Queues Required: No 00:28:53.520 Arbitration Mechanisms Supported 00:28:53.520 Weighted Round Robin: Not Supported 00:28:53.520 Vendor Specific: Not Supported 00:28:53.520 Reset Timeout: 7500 ms 00:28:53.520 Doorbell Stride: 4 bytes 00:28:53.520 NVM Subsystem Reset: Not Supported 00:28:53.520 Command Sets Supported 00:28:53.520 NVM Command Set: Supported 00:28:53.520 Boot Partition: Not Supported 00:28:53.520 Memory Page Size Minimum: 4096 bytes 00:28:53.520 Memory Page Size Maximum: 4096 bytes 00:28:53.520 Persistent Memory Region: Not Supported 00:28:53.520 Optional Asynchronous Events Supported 00:28:53.520 Namespace Attribute Notices: Supported 00:28:53.520 Firmware Activation Notices: Not Supported 00:28:53.520 ANA Change Notices: Supported 00:28:53.520 PLE Aggregate Log Change Notices: Not Supported 00:28:53.520 LBA Status Info Alert Notices: Not Supported 00:28:53.520 EGE Aggregate Log Change Notices: Not Supported 00:28:53.520 Normal NVM Subsystem Shutdown event: Not Supported 00:28:53.520 Zone Descriptor Change Notices: Not Supported 00:28:53.520 Discovery Log Change Notices: Not Supported 00:28:53.520 Controller Attributes 00:28:53.520 128-bit Host Identifier: Supported 00:28:53.520 Non-Operational Permissive Mode: Not Supported 00:28:53.520 NVM Sets: Not 
Supported 00:28:53.520 Read Recovery Levels: Not Supported 00:28:53.520 Endurance Groups: Not Supported 00:28:53.520 Predictable Latency Mode: Not Supported 00:28:53.520 Traffic Based Keep Alive: Supported 00:28:53.520 Namespace Granularity: Not Supported 00:28:53.520 SQ Associations: Not Supported 00:28:53.520 UUID List: Not Supported 00:28:53.520 Multi-Domain Subsystem: Not Supported 00:28:53.520 Fixed Capacity Management: Not Supported 00:28:53.520 Variable Capacity Management: Not Supported 00:28:53.520 Delete Endurance Group: Not Supported 00:28:53.520 Delete NVM Set: Not Supported 00:28:53.520 Extended LBA Formats Supported: Not Supported 00:28:53.520 Flexible Data Placement Supported: Not Supported 00:28:53.520 00:28:53.520 Controller Memory Buffer Support 00:28:53.520 ================================ 00:28:53.520 Supported: No 00:28:53.520 00:28:53.520 Persistent Memory Region Support 00:28:53.520 ================================ 00:28:53.520 Supported: No 00:28:53.520 00:28:53.520 Admin Command Set Attributes 00:28:53.520 ============================ 00:28:53.520 Security Send/Receive: Not Supported 00:28:53.520 Format NVM: Not Supported 00:28:53.520 Firmware Activate/Download: Not Supported 00:28:53.520 Namespace Management: Not Supported 00:28:53.520 Device Self-Test: Not Supported 00:28:53.520 Directives: Not Supported 00:28:53.520 NVMe-MI: Not Supported 00:28:53.520 Virtualization Management: Not Supported 00:28:53.520 Doorbell Buffer Config: Not Supported 00:28:53.520 Get LBA Status Capability: Not Supported 00:28:53.520 Command & Feature Lockdown Capability: Not Supported 00:28:53.520 Abort Command Limit: 4 00:28:53.520 Async Event Request Limit: 4 00:28:53.520 Number of Firmware Slots: N/A 00:28:53.520 Firmware Slot 1 Read-Only: N/A 00:28:53.520 Firmware Activation Without Reset: N/A 00:28:53.520 Multiple Update Detection Support: N/A 00:28:53.520 Firmware Update Granularity: No Information Provided 00:28:53.520 Per-Namespace SMART Log: Yes 
00:28:53.520 Asymmetric Namespace Access Log Page: Supported 00:28:53.520 ANA Transition Time : 10 sec 00:28:53.520 00:28:53.520 Asymmetric Namespace Access Capabilities 00:28:53.520 ANA Optimized State : Supported 00:28:53.520 ANA Non-Optimized State : Supported 00:28:53.520 ANA Inaccessible State : Supported 00:28:53.520 ANA Persistent Loss State : Supported 00:28:53.520 ANA Change State : Supported 00:28:53.520 ANAGRPID is not changed : No 00:28:53.520 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:53.520 00:28:53.520 ANA Group Identifier Maximum : 128 00:28:53.520 Number of ANA Group Identifiers : 128 00:28:53.520 Max Number of Allowed Namespaces : 1024 00:28:53.520 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:53.520 Command Effects Log Page: Supported 00:28:53.520 Get Log Page Extended Data: Supported 00:28:53.520 Telemetry Log Pages: Not Supported 00:28:53.520 Persistent Event Log Pages: Not Supported 00:28:53.520 Supported Log Pages Log Page: May Support 00:28:53.520 Commands Supported & Effects Log Page: Not Supported 00:28:53.520 Feature Identifiers & Effects Log Page: May Support 00:28:53.520 NVMe-MI Commands & Effects Log Page: May Support 00:28:53.520 Data Area 4 for Telemetry Log: Not Supported 00:28:53.520 Error Log Page Entries Supported: 128 00:28:53.520 Keep Alive: Supported 00:28:53.520 Keep Alive Granularity: 1000 ms 00:28:53.520 00:28:53.520 NVM Command Set Attributes 00:28:53.520 ========================== 00:28:53.520 Submission Queue Entry Size 00:28:53.520 Max: 64 00:28:53.520 Min: 64 00:28:53.520 Completion Queue Entry Size 00:28:53.520 Max: 16 00:28:53.520 Min: 16 00:28:53.520 Number of Namespaces: 1024 00:28:53.520 Compare Command: Not Supported 00:28:53.521 Write Uncorrectable Command: Not Supported 00:28:53.521 Dataset Management Command: Supported 00:28:53.521 Write Zeroes Command: Supported 00:28:53.521 Set Features Save Field: Not Supported 00:28:53.521 Reservations: Not Supported 00:28:53.521 Timestamp: Not Supported 
00:28:53.521 Copy: Not Supported 00:28:53.521 Volatile Write Cache: Present 00:28:53.521 Atomic Write Unit (Normal): 1 00:28:53.521 Atomic Write Unit (PFail): 1 00:28:53.521 Atomic Compare & Write Unit: 1 00:28:53.521 Fused Compare & Write: Not Supported 00:28:53.521 Scatter-Gather List 00:28:53.521 SGL Command Set: Supported 00:28:53.521 SGL Keyed: Not Supported 00:28:53.521 SGL Bit Bucket Descriptor: Not Supported 00:28:53.521 SGL Metadata Pointer: Not Supported 00:28:53.521 Oversized SGL: Not Supported 00:28:53.521 SGL Metadata Address: Not Supported 00:28:53.521 SGL Offset: Supported 00:28:53.521 Transport SGL Data Block: Not Supported 00:28:53.521 Replay Protected Memory Block: Not Supported 00:28:53.521 00:28:53.521 Firmware Slot Information 00:28:53.521 ========================= 00:28:53.521 Active slot: 0 00:28:53.521 00:28:53.521 Asymmetric Namespace Access 00:28:53.521 =========================== 00:28:53.521 Change Count : 0 00:28:53.521 Number of ANA Group Descriptors : 1 00:28:53.521 ANA Group Descriptor : 0 00:28:53.521 ANA Group ID : 1 00:28:53.521 Number of NSID Values : 1 00:28:53.521 Change Count : 0 00:28:53.521 ANA State : 1 00:28:53.521 Namespace Identifier : 1 00:28:53.521 00:28:53.521 Commands Supported and Effects 00:28:53.521 ============================== 00:28:53.521 Admin Commands 00:28:53.521 -------------- 00:28:53.521 Get Log Page (02h): Supported 00:28:53.521 Identify (06h): Supported 00:28:53.521 Abort (08h): Supported 00:28:53.521 Set Features (09h): Supported 00:28:53.521 Get Features (0Ah): Supported 00:28:53.521 Asynchronous Event Request (0Ch): Supported 00:28:53.521 Keep Alive (18h): Supported 00:28:53.521 I/O Commands 00:28:53.521 ------------ 00:28:53.521 Flush (00h): Supported 00:28:53.521 Write (01h): Supported LBA-Change 00:28:53.521 Read (02h): Supported 00:28:53.521 Write Zeroes (08h): Supported LBA-Change 00:28:53.521 Dataset Management (09h): Supported 00:28:53.521 00:28:53.521 Error Log 00:28:53.521 ========= 
00:28:53.521 Entry: 0 00:28:53.521 Error Count: 0x3 00:28:53.521 Submission Queue Id: 0x0 00:28:53.521 Command Id: 0x5 00:28:53.521 Phase Bit: 0 00:28:53.521 Status Code: 0x2 00:28:53.521 Status Code Type: 0x0 00:28:53.521 Do Not Retry: 1 00:28:53.521 Error Location: 0x28 00:28:53.521 LBA: 0x0 00:28:53.521 Namespace: 0x0 00:28:53.521 Vendor Log Page: 0x0 00:28:53.521 ----------- 00:28:53.521 Entry: 1 00:28:53.521 Error Count: 0x2 00:28:53.521 Submission Queue Id: 0x0 00:28:53.521 Command Id: 0x5 00:28:53.521 Phase Bit: 0 00:28:53.521 Status Code: 0x2 00:28:53.521 Status Code Type: 0x0 00:28:53.521 Do Not Retry: 1 00:28:53.521 Error Location: 0x28 00:28:53.521 LBA: 0x0 00:28:53.521 Namespace: 0x0 00:28:53.521 Vendor Log Page: 0x0 00:28:53.521 ----------- 00:28:53.521 Entry: 2 00:28:53.521 Error Count: 0x1 00:28:53.521 Submission Queue Id: 0x0 00:28:53.521 Command Id: 0x4 00:28:53.521 Phase Bit: 0 00:28:53.521 Status Code: 0x2 00:28:53.521 Status Code Type: 0x0 00:28:53.521 Do Not Retry: 1 00:28:53.521 Error Location: 0x28 00:28:53.521 LBA: 0x0 00:28:53.521 Namespace: 0x0 00:28:53.521 Vendor Log Page: 0x0 00:28:53.521 00:28:53.521 Number of Queues 00:28:53.521 ================ 00:28:53.521 Number of I/O Submission Queues: 128 00:28:53.521 Number of I/O Completion Queues: 128 00:28:53.521 00:28:53.521 ZNS Specific Controller Data 00:28:53.521 ============================ 00:28:53.521 Zone Append Size Limit: 0 00:28:53.521 00:28:53.521 00:28:53.521 Active Namespaces 00:28:53.521 ================= 00:28:53.521 get_feature(0x05) failed 00:28:53.521 Namespace ID:1 00:28:53.521 Command Set Identifier: NVM (00h) 00:28:53.521 Deallocate: Supported 00:28:53.521 Deallocated/Unwritten Error: Not Supported 00:28:53.521 Deallocated Read Value: Unknown 00:28:53.521 Deallocate in Write Zeroes: Not Supported 00:28:53.521 Deallocated Guard Field: 0xFFFF 00:28:53.521 Flush: Supported 00:28:53.521 Reservation: Not Supported 00:28:53.521 Namespace Sharing Capabilities: Multiple 
Controllers 00:28:53.521 Size (in LBAs): 3750748848 (1788GiB) 00:28:53.521 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:53.521 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:53.521 UUID: 124bfe6f-78a2-406a-8d21-98b5c1d33985 00:28:53.521 Thin Provisioning: Not Supported 00:28:53.521 Per-NS Atomic Units: Yes 00:28:53.521 Atomic Write Unit (Normal): 8 00:28:53.521 Atomic Write Unit (PFail): 8 00:28:53.521 Preferred Write Granularity: 8 00:28:53.521 Atomic Compare & Write Unit: 8 00:28:53.521 Atomic Boundary Size (Normal): 0 00:28:53.521 Atomic Boundary Size (PFail): 0 00:28:53.521 Atomic Boundary Offset: 0 00:28:53.521 NGUID/EUI64 Never Reused: No 00:28:53.521 ANA group ID: 1 00:28:53.521 Namespace Write Protected: No 00:28:53.521 Number of LBA Formats: 1 00:28:53.521 Current LBA Format: LBA Format #00 00:28:53.521 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:53.521 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.521 rmmod nvme_tcp 00:28:53.521 rmmod nvme_fabrics 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:53.521 11:23:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.521 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:56.068 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:00.280 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:29:00.280 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:00.280 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:00.280 00:29:00.280 real 0m20.820s 00:29:00.280 user 0m5.820s 00:29:00.280 sys 0m12.081s 00:29:00.280 11:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.280 11:23:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.280 ************************************ 00:29:00.280 END TEST nvmf_identify_kernel_target 00:29:00.280 ************************************ 00:29:00.280 11:23:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:00.280 11:23:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:00.280 11:23:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.280 11:23:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.280 ************************************ 00:29:00.280 START TEST nvmf_auth_host 00:29:00.280 ************************************ 00:29:00.280 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:00.542 * Looking for test storage... 
00:29:00.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.542 --rc genhtml_branch_coverage=1 00:29:00.542 --rc genhtml_function_coverage=1 00:29:00.542 --rc genhtml_legend=1 00:29:00.542 --rc geninfo_all_blocks=1 00:29:00.542 --rc geninfo_unexecuted_blocks=1 00:29:00.542 00:29:00.542 ' 00:29:00.542 11:23:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.542 --rc genhtml_branch_coverage=1 00:29:00.542 --rc genhtml_function_coverage=1 00:29:00.542 --rc genhtml_legend=1 00:29:00.542 --rc geninfo_all_blocks=1 00:29:00.542 --rc geninfo_unexecuted_blocks=1 00:29:00.542 00:29:00.542 ' 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.542 --rc genhtml_branch_coverage=1 00:29:00.542 --rc genhtml_function_coverage=1 00:29:00.542 --rc genhtml_legend=1 00:29:00.542 --rc geninfo_all_blocks=1 00:29:00.542 --rc geninfo_unexecuted_blocks=1 00:29:00.542 00:29:00.542 ' 00:29:00.542 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.542 --rc genhtml_branch_coverage=1 00:29:00.542 --rc genhtml_function_coverage=1 00:29:00.542 --rc genhtml_legend=1 00:29:00.542 --rc geninfo_all_blocks=1 00:29:00.543 --rc geninfo_unexecuted_blocks=1 00:29:00.543 00:29:00.543 ' 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.543 11:23:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.543 11:23:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.543 11:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:08.691 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:08.691 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
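The scan above (`gather_supported_nvmf_pci_devs` walking `pci_devs`, matching `0x8086:0x159b`, then listing `/sys/bus/pci/devices/$pci/net/` to find `cvl_0_0`/`cvl_0_1`) can be sketched as a small Python helper. This is a simplified illustration of the shell logic in the log, not SPDK's actual implementation; the function name `scan_pci_net_devs` and its arguments are hypothetical.

```python
import os

def scan_pci_net_devs(pci_root, wanted_ids):
    """Mimic the log's device scan: for each PCI function under pci_root,
    read its sysfs vendor/device IDs, and if the pair is in wanted_ids,
    collect the network interface names under <dev>/net/.  A sketch of
    what nvmf/common.sh's gather_supported_nvmf_pci_devs loop does."""
    found = {}
    for dev in sorted(os.listdir(pci_root)):
        base = os.path.join(pci_root, dev)
        try:
            vendor = open(os.path.join(base, "vendor")).read().strip()
            device = open(os.path.join(base, "device")).read().strip()
        except FileNotFoundError:
            continue  # not a PCI function directory we can classify
        if (vendor, device) in wanted_ids:
            net_dir = os.path.join(base, "net")
            # "Found net devices under <pci>: <ifname>" in the log
            found[dev] = sorted(os.listdir(net_dir)) if os.path.isdir(net_dir) else []
    return found
```

Against a real system the call would be `scan_pci_net_devs("/sys/bus/pci/devices", {("0x8086", "0x159b")})`, which on this test node would report `cvl_0_0` under `0000:31:00.0` and `cvl_0_1` under `0000:31:00.1`.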
00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:08.691 Found net devices under 0000:31:00.0: cvl_0_0 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:08.691 Found net devices under 0000:31:00.1: cvl_0_1 00:29:08.691 11:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.691 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:08.692 11:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.692 11:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:08.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:29:08.953 00:29:08.953 --- 10.0.0.2 ping statistics --- 00:29:08.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.953 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:29:08.953 00:29:08.953 --- 10.0.0.1 ping statistics --- 00:29:08.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.953 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=107591 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 107591 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 107591 ']' 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.953 11:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.896 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:09.897 11:23:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5acaee18e9655861bb0553992e0e624d 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Xim 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5acaee18e9655861bb0553992e0e624d 0 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5acaee18e9655861bb0553992e0e624d 0 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5acaee18e9655861bb0553992e0e624d 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Xim 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Xim 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Xim 
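The key-generation steps traced above (`xxd -p -c0 -l 16 /dev/urandom` for the raw secret, then `format_dhchap_key ... 0` running an inline `python -` to produce the file stored in `keys[0]`) follow the DH-HMAC-CHAP secret representation: `DHHC-1:<digest id>:<base64 of secret || CRC-32>:`. Below is a hedged Python sketch of that formatting step; the CRC-32 byte order and the exact framing are assumptions based on the spec's secret format, not a copy of the inline python in nvmf/common.sh.

```python
import base64, zlib

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Wrap a raw hex secret in the DHHC-1 representation used for NVMe
    DH-HMAC-CHAP secrets: base64(secret || CRC-32(secret), little-endian),
    framed as DHHC-1:0<digest>:<base64>:  -- a sketch of what the log's
    format_dhchap_key/format_key helpers produce."""
    raw = bytes.fromhex(hex_key)
    crc = zlib.crc32(raw) & 0xFFFFFFFF          # integrity check over the secret
    blob = raw + crc.to_bytes(4, "little")      # secret followed by its CRC-32
    return f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:"

# e.g. the null/32 key from the log, digest indicator 0 ("null"):
key = format_dhchap_key("5acaee18e9655861bb0553992e0e624d", 0)
```

The result is what gets `chmod 0600`'d into files like `/tmp/spdk.key-null.Xim`; the digest indicator (0 null, 1 sha256, 2 sha384, 3 sha512) matches the `digests` map visible in the trace.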
00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c920901d5425f7a231fc96685895f908a5d7fdc2cdfed68dbb0ed643ed6ee4c2 00:29:09.897 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tbe 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c920901d5425f7a231fc96685895f908a5d7fdc2cdfed68dbb0ed643ed6ee4c2 3 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c920901d5425f7a231fc96685895f908a5d7fdc2cdfed68dbb0ed643ed6ee4c2 3 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c920901d5425f7a231fc96685895f908a5d7fdc2cdfed68dbb0ed643ed6ee4c2 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tbe 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tbe 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.tbe 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1889b4ee0492845f7a2fa351363c355a912ece7e8fb6d9ef 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zr8 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1889b4ee0492845f7a2fa351363c355a912ece7e8fb6d9ef 0 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1889b4ee0492845f7a2fa351363c355a912ece7e8fb6d9ef 0 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1889b4ee0492845f7a2fa351363c355a912ece7e8fb6d9ef 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:10.158 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zr8 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zr8 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.zr8 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ef6c3b02f4a449a7c96518a5cc4febe695d6a16130f183e7 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0QX 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ef6c3b02f4a449a7c96518a5cc4febe695d6a16130f183e7 2 00:29:10.159 11:23:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ef6c3b02f4a449a7c96518a5cc4febe695d6a16130f183e7 2 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ef6c3b02f4a449a7c96518a5cc4febe695d6a16130f183e7 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0QX 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0QX 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0QX 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=06803e4474c2a63a5d21276f7ab6a855 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.X1v 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 06803e4474c2a63a5d21276f7ab6a855 1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 06803e4474c2a63a5d21276f7ab6a855 1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=06803e4474c2a63a5d21276f7ab6a855 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.X1v 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.X1v 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.X1v 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2dc5f4775c783e4d8322e2a29d9a907 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XIj 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2dc5f4775c783e4d8322e2a29d9a907 1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2dc5f4775c783e4d8322e2a29d9a907 1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2dc5f4775c783e4d8322e2a29d9a907 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:10.159 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XIj 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XIj 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.XIj 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:10.420 11:23:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0e571f479a961121094d48a16dad496b8dad7be87d6fe858 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PwD 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0e571f479a961121094d48a16dad496b8dad7be87d6fe858 2 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0e571f479a961121094d48a16dad496b8dad7be87d6fe858 2 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0e571f479a961121094d48a16dad496b8dad7be87d6fe858 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PwD 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PwD 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.PwD 00:29:10.420 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6f96e1a373ddf55609d70ad648884c7f 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5UM 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6f96e1a373ddf55609d70ad648884c7f 0 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6f96e1a373ddf55609d70ad648884c7f 0 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6f96e1a373ddf55609d70ad648884c7f 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5UM 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5UM 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.5UM 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=df28b903b177935239d3a2babcb1c4c422d85091090d00a8254fcb0658a45ee1 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bYO 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key df28b903b177935239d3a2babcb1c4c422d85091090d00a8254fcb0658a45ee1 3 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 df28b903b177935239d3a2babcb1c4c422d85091090d00a8254fcb0658a45ee1 3 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=df28b903b177935239d3a2babcb1c4c422d85091090d00a8254fcb0658a45ee1 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:10.421 11:23:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bYO 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bYO 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bYO 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 107591 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 107591 ']' 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.421 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xim 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.tbe ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tbe 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zr8 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0QX ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0QX 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.X1v 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.XIj ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XIj 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.PwD 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5UM ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5UM 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bYO 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.683 11:23:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:10.683 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:10.684 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:10.684 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:10.684 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:10.684 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:10.684 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:10.684 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:10.684 11:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:10.684 11:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:10.684 11:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:14.895 Waiting for block devices as requested 00:29:14.895 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:14.895 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:14.895 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:14.895 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:14.895 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:14.895 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:15.156 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:15.156 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:15.156 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:15.416 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:15.416 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:15.416 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:15.678 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:15.678 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:15.678 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:15.939 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:15.939 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:16.883 11:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:16.883 No valid GPT data, bailing 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:29:16.883 00:29:16.883 Discovery Log Number of Records 2, Generation counter 2 00:29:16.883 =====Discovery Log Entry 0====== 00:29:16.883 trtype: tcp 00:29:16.883 adrfam: ipv4 00:29:16.883 subtype: current discovery subsystem 00:29:16.883 treq: not specified, sq flow control disable supported 00:29:16.883 portid: 1 00:29:16.883 trsvcid: 4420 00:29:16.883 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:16.883 traddr: 10.0.0.1 00:29:16.883 eflags: none 00:29:16.883 sectype: none 00:29:16.883 =====Discovery Log Entry 1====== 00:29:16.883 trtype: tcp 00:29:16.883 adrfam: ipv4 00:29:16.883 subtype: nvme subsystem 00:29:16.883 treq: not specified, sq flow control disable supported 00:29:16.883 portid: 1 00:29:16.883 trsvcid: 4420 00:29:16.883 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:16.883 traddr: 10.0.0.1 00:29:16.883 eflags: none 00:29:16.883 sectype: none 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:16.883 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.884 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.144 nvme0n1 00:29:17.144 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.144 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.145 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.406 nvme0n1 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.406 11:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.406 
11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.406 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.407 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.407 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.407 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.407 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.407 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.667 nvme0n1 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.667 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.668 11:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:29:17.929 nvme0n1 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.929 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:17.930 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.930 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.930 nvme0n1 00:29:17.930 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.930 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.930 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:29:17.930 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.930 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.930 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.192 11:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.192 nvme0n1 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.192 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.192 
11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:18.454 
11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.454 11:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.454 nvme0n1 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.454 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.455 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.455 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.455 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.455 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.455 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.716 11:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.716 11:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.716 11:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.716 nvme0n1
00:29:18.716 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.716 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:18.716 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:18.716 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.716 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.716 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah:
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6:
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah:
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]]
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6:
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:18.977 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.978 nvme0n1
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.978 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.238 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:19.238 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:19.238 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==:
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p:
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==:
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]]
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p:
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.239 nvme0n1
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.239 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=:
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=:
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.500 nvme0n1
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.500 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP:
00:29:19.761 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=:
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP:
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]]
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=:
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.762 11:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.024 nvme0n1
00:29:20.024 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.024 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:20.024 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:20.024 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.024 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.024 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.024 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==:
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==:
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==:
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]]
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==:
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.025 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.288 nvme0n1
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:20.288 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah:
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6:
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah:
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]]
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6:
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.289 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.550 nvme0n1
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.550 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==:
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p:
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==:
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]]
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p:
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.812 11:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.073 nvme0n1
00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.073 11:23:29
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.073 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.074 
11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.074 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.335 nvme0n1 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:21.335 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.335 11:23:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.336 11:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.914 nvme0n1 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:21.914 11:23:30 
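
The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion traced above (host/auth.sh@58) is why the keyid-4 attach earlier in the log carries no `--dhchap-ctrlr-key` flag at all: an empty controller key expands to an empty array, contributing zero arguments rather than an empty string. A standalone sketch of the idiom — the `ckeys` contents here are placeholders, not the real DHHC-1 material:

```shell
#!/usr/bin/env bash
# Sketch of the conditional-argument idiom from host/auth.sh@58.
# ckeys values are placeholders; keyid 4 is deliberately empty, as in the log.
ckeys=([1]="placeholder-ctrlr-key" [4]="")

arg_count() {
    local keyid=$1
    # :+ yields the flag pair only when ckeys[keyid] is set and non-empty;
    # the inner quotes keep "ckey${keyid}" a single word after splitting.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"
}

arg_count 1   # flag plus value: 2 array elements
arg_count 4   # empty ckey: 0 elements, flag omitted entirely
```

Passing the array unquoted-expanded into the `rpc_cmd` invocation means the optional flag simply vanishes when unused, avoiding an `if`/`else` around the whole command line.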
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.914 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.640 nvme0n1 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:22.640 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.641 11:23:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.641 11:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.904 nvme0n1 00:29:22.904 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.905 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.905 11:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.905 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.905 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.905 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.905 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.905 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.905 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.905 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.165 11:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.165 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.166 11:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.166 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.427 nvme0n1 00:29:23.427 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.427 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.427 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.427 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.427 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.427 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.689 11:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.689 11:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.689 11:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.950 nvme0n1 00:29:23.950 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.950 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.950 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.950 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.950 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.950 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.211 11:23:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.211 11:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.786 nvme0n1 00:29:24.786 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.786 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.787 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.787 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.787 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.787 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.052 11:23:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.052 11:23:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.052 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.053 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.053 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.053 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.053 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:25.053 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.053 11:23:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.623 nvme0n1 00:29:25.623 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.623 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.623 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.623 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.623 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.623 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.884 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.884 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.884 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.884 11:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.884 11:23:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.884 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.456 nvme0n1 00:29:26.456 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.456 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.456 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.456 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.456 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.456 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.717 11:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.288 nvme0n1 00:29:27.288 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.288 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.288 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.288 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.288 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.288 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.549 
11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.549 11:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.121 nvme0n1 00:29:28.121 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.121 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.121 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.121 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.121 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.121 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.382 nvme0n1 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.382 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.643 
11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.643 nvme0n1 
00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:28.643 11:23:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.643 
11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.643 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.644 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.644 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:28.644 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.644 11:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.905 nvme0n1 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.905 11:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.905 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.166 nvme0n1 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.166 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.167 11:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.167 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.427 nvme0n1 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:29:29.427 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.428 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.689 nvme0n1 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:29.689 
11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.689 11:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.949 nvme0n1 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.949 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 
00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:29.950 11:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.950 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.211 nvme0n1 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.211 11:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.211 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.473 nvme0n1 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.473 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.734 nvme0n1 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.734 11:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:30.734 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.735 11:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.735 11:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:30.735 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:30.735 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:30.735 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.735 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.735 11:23:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.995 nvme0n1 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:30.995 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.256 
11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.256 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.518 nvme0n1 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.518 11:23:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.518 11:23:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.518 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.779 nvme0n1 00:29:31.779 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.779 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.779 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.779 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.779 11:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.779 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.780 11:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.780 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.041 nvme0n1 00:29:32.041 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.041 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.041 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.041 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.041 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.041 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.301 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.301 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.301 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.301 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.302 11:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:32.302 11:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:32.302 
11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.302 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.563 nvme0n1 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:32.563 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.564 11:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.564 11:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.135 nvme0n1 
00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:33.135 11:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.135 
11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.135 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.708 nvme0n1 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.708 11:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:33.708 11:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.708 11:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.280 nvme0n1 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:34.280 11:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.280 11:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.280 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.852 nvme0n1 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.852 11:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.852 11:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:34.852 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.424 nvme0n1 00:29:35.424 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.424 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.424 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.424 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.424 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.424 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:35.425 11:23:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.425 11:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.997 nvme0n1 00:29:35.997 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:35.997 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.997 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.997 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.997 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.997 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.258 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.259 11:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.830 nvme0n1 00:29:36.830 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.830 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.830 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.830 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:36.830 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.830 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:37.090 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.091 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.663 nvme0n1 00:29:37.663 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.663 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.663 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.663 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.663 11:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.663 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:37.924 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:37.925 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.925 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.925 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.497 nvme0n1 00:29:38.497 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.497 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.497 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.497 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.497 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.757 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.758 11:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:39.329 nvme0n1 00:29:39.329 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.329 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.329 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.329 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.329 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:39.590 11:23:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.590 nvme0n1 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.590 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.852 11:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.852 nvme0n1 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.852 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:40.114 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.115 nvme0n1 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.115 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.378 nvme0n1 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.378 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:40.640 nvme0n1 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:40.640 11:23:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.640 11:23:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.640 11:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.902 nvme0n1 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:40.902 11:23:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.902 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.903 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.903 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.903 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:40.903 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.903 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.164 nvme0n1 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.164 
11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.164 11:23:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.164 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.425 nvme0n1 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.425 11:23:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:41.425 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.426 11:23:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.426 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.687 nvme0n1 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.687 11:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:41.687 11:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:41.687 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.688 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.948 nvme0n1 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.948 
11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:41.948 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.949 
11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.949 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.520 nvme0n1 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.520 11:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:42.520 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.521 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.781 nvme0n1 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.781 11:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.781 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.782 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.042 nvme0n1 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==: 00:29:43.042 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]] 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.043 11:23:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.043 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.303 nvme0n1 00:29:43.563 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.563 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.563 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.563 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.564 11:23:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=: 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.564 11:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.833 nvme0n1 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.833 
11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP: 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]] 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.833 11:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.833 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.404 nvme0n1 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:44.404 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:44.405 11:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.405 11:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.976 nvme0n1 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:44.976 
11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.976 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.547 nvme0n1
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==:
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p:
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==:
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p:
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.547 11:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:46.118 nvme0n1
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=:
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=:
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.118 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:46.378 nvme0n1
00:29:46.378 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.379 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:46.379 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:46.379 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.379 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP:
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=:
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjYWVlMThlOTY1NTg2MWJiMDU1Mzk5MmUwZTYyNGQq4mKP:
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=: ]]
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkyMDkwMWQ1NDI1ZjdhMjMxZmM5NjY4NTg5NWY5MDhhNWQ3ZmRjMmNkZmVkNjhkYmIwZWQ2NDNlZDZlZTRjMowSFGs=:
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:46.639 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.640 11:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:47.211 nvme0n1
00:29:47.211 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:47.211 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:47.211 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:47.211 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.211 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==:
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==:
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==:
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]]
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==:
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.472 11:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.043 nvme0n1
00:29:48.043 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:48.303 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah:
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6:
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah:
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]]
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6:
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.304 11:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.244 nvme0n1
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==:
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p:
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU1NzFmNDc5YTk2MTEyMTA5NGQ0OGExNmRhZDQ5NmI4ZGFkN2JlODdkNmZlODU4PB3CxA==:
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p: ]]
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmY5NmUxYTM3M2RkZjU1NjA5ZDcwYWQ2NDg4ODRjN2bRQS7p:
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.244 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.245 11:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.819 nvme0n1
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=:
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYyOGI5MDNiMTc3OTM1MjM5ZDNhMmJhYmNiMWM0YzQyMmQ4NTA5MTA5MGQwMGE4MjU0ZmNiMDY1OGE0NWVlMXTg5Qw=:
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.819 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.764 nvme0n1
00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.764 11:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.764 request: 00:29:50.764 { 00:29:50.764 "name": "nvme0", 00:29:50.764 "trtype": "tcp", 00:29:50.764 "traddr": "10.0.0.1", 00:29:50.764 "adrfam": "ipv4", 00:29:50.764 "trsvcid": "4420", 00:29:50.764 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:50.764 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:50.764 "prchk_reftag": false, 00:29:50.764 "prchk_guard": false, 00:29:50.764 "hdgst": false, 00:29:50.764 "ddgst": false, 00:29:50.764 "allow_unrecognized_csi": false, 00:29:50.764 "method": "bdev_nvme_attach_controller", 00:29:50.764 "req_id": 1 00:29:50.764 } 00:29:50.764 Got JSON-RPC error 
response 00:29:50.764 response: 00:29:50.764 { 00:29:50.764 "code": -5, 00:29:50.764 "message": "Input/output error" 00:29:50.764 } 00:29:50.764 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:50.764 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.765 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.027 request: 
00:29:51.027 { 00:29:51.027 "name": "nvme0", 00:29:51.027 "trtype": "tcp", 00:29:51.027 "traddr": "10.0.0.1", 00:29:51.027 "adrfam": "ipv4", 00:29:51.027 "trsvcid": "4420", 00:29:51.027 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:51.027 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:51.027 "prchk_reftag": false, 00:29:51.027 "prchk_guard": false, 00:29:51.027 "hdgst": false, 00:29:51.027 "ddgst": false, 00:29:51.027 "dhchap_key": "key2", 00:29:51.027 "allow_unrecognized_csi": false, 00:29:51.027 "method": "bdev_nvme_attach_controller", 00:29:51.027 "req_id": 1 00:29:51.027 } 00:29:51.027 Got JSON-RPC error response 00:29:51.027 response: 00:29:51.027 { 00:29:51.027 "code": -5, 00:29:51.027 "message": "Input/output error" 00:29:51.027 } 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.027 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.028 11:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.028 request: 00:29:51.028 { 00:29:51.028 "name": "nvme0", 00:29:51.028 "trtype": "tcp", 00:29:51.028 "traddr": "10.0.0.1", 00:29:51.028 "adrfam": "ipv4", 00:29:51.028 "trsvcid": "4420", 00:29:51.028 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:51.028 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:51.028 "prchk_reftag": false, 00:29:51.028 "prchk_guard": false, 00:29:51.028 "hdgst": false, 00:29:51.028 "ddgst": false, 00:29:51.028 "dhchap_key": "key1", 00:29:51.028 "dhchap_ctrlr_key": "ckey2", 00:29:51.028 "allow_unrecognized_csi": false, 00:29:51.028 "method": "bdev_nvme_attach_controller", 00:29:51.028 "req_id": 1 00:29:51.028 } 00:29:51.028 Got JSON-RPC error response 00:29:51.028 response: 00:29:51.028 { 00:29:51.028 "code": -5, 00:29:51.028 "message": "Input/output error" 00:29:51.028 } 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.028 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.289 nvme0n1 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:51.289 11:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:51.289 
11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.289 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.290 request: 00:29:51.290 { 00:29:51.290 "name": "nvme0", 00:29:51.290 "dhchap_key": "key1", 00:29:51.290 "dhchap_ctrlr_key": "ckey2", 00:29:51.290 "method": "bdev_nvme_set_keys", 00:29:51.290 "req_id": 1 00:29:51.290 } 00:29:51.290 Got JSON-RPC error response 00:29:51.290 response: 
00:29:51.290 { 00:29:51.290 "code": -13, 00:29:51.290 "message": "Permission denied" 00:29:51.290 } 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:51.290 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.551 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:51.551 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.551 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.551 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.551 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:51.551 11:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:52.492 11:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.492 11:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:52.492 11:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.492 11:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.492 11:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.492 11:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:52.492 11:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg4OWI0ZWUwNDkyODQ1ZjdhMmZhMzUxMzYzYzM1NWE5MTJlY2U3ZThmYjZkOWVmYY8TVA==: 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: ]] 00:29:53.433 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWY2YzNiMDJmNGE0NDlhN2M5NjUxOGE1Y2M0ZmViZTY5NWQ2YTE2MTMwZjE4M2U3HIk2bw==: 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.693 nvme0n1 00:29:53.693 11:24:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY4MDNlNDQ3NGMyYTYzYTVkMjEyNzZmN2FiNmE4NTXb2wah: 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: ]] 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTJkYzVmNDc3NWM3ODNlNGQ4MzIyZTJhMjlkOWE5MDfHM4J6: 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:53.693 11:24:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.693 11:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.693 request: 00:29:53.693 { 00:29:53.693 "name": "nvme0", 00:29:53.693 "dhchap_key": "key2", 00:29:53.693 "dhchap_ctrlr_key": "ckey1", 00:29:53.693 "method": "bdev_nvme_set_keys", 00:29:53.693 "req_id": 1 00:29:53.693 } 00:29:53.693 Got JSON-RPC error response 00:29:53.693 response: 00:29:53.693 { 00:29:53.693 "code": -13, 00:29:53.693 "message": "Permission denied" 00:29:53.693 } 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:53.693 11:24:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.693 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.953 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:53.953 11:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:54.892 rmmod nvme_tcp 
00:29:54.892 rmmod nvme_fabrics 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 107591 ']' 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 107591 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 107591 ']' 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 107591 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107591 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107591' 00:29:54.892 killing process with pid 107591 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 107591 00:29:54.892 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 107591 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 
-- # nvmf_tcp_fini 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.153 11:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.063 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.063 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:57.063 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:57.063 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:57.063 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:57.063 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:57.323 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:57.323 11:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:57.323 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:57.323 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:57.323 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:57.323 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:57.323 11:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:01.527 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:01.527 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:01.787 11:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Xim /tmp/spdk.key-null.zr8 /tmp/spdk.key-sha256.X1v /tmp/spdk.key-sha384.PwD 
/tmp/spdk.key-sha512.bYO /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:01.787 11:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:06.140 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:06.140 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:06.140 00:30:06.140 real 1m5.703s 00:30:06.140 user 0m58.234s 00:30:06.140 sys 0m17.539s 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.140 ************************************ 00:30:06.140 END TEST nvmf_auth_host 00:30:06.140 ************************************ 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.140 ************************************ 00:30:06.140 START TEST nvmf_digest 00:30:06.140 ************************************ 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:06.140 * Looking for test storage... 00:30:06.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.140 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.402 11:24:14 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.402 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.403 --rc genhtml_branch_coverage=1 00:30:06.403 --rc genhtml_function_coverage=1 00:30:06.403 --rc genhtml_legend=1 00:30:06.403 --rc geninfo_all_blocks=1 00:30:06.403 --rc geninfo_unexecuted_blocks=1 00:30:06.403 00:30:06.403 ' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.403 --rc genhtml_branch_coverage=1 00:30:06.403 --rc genhtml_function_coverage=1 00:30:06.403 --rc genhtml_legend=1 00:30:06.403 --rc geninfo_all_blocks=1 00:30:06.403 --rc geninfo_unexecuted_blocks=1 00:30:06.403 00:30:06.403 ' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.403 --rc genhtml_branch_coverage=1 00:30:06.403 --rc genhtml_function_coverage=1 00:30:06.403 --rc genhtml_legend=1 00:30:06.403 --rc geninfo_all_blocks=1 00:30:06.403 --rc geninfo_unexecuted_blocks=1 00:30:06.403 00:30:06.403 ' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.403 --rc genhtml_branch_coverage=1 00:30:06.403 --rc genhtml_function_coverage=1 00:30:06.403 --rc genhtml_legend=1 00:30:06.403 --rc geninfo_all_blocks=1 00:30:06.403 --rc geninfo_unexecuted_blocks=1 00:30:06.403 00:30:06.403 ' 00:30:06.403 11:24:14 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.403 
11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:06.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.403 11:24:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.403 11:24:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:14.548 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.549 11:24:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:14.549 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:14.549 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:14.549 Found net devices under 0000:31:00.0: cvl_0_0 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:14.549 Found net devices under 0000:31:00.1: cvl_0_1 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
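The `ip netns` / `ip addr` trace above is `nvmf_tcp_init` building its two-endpoint TCP topology: one port (`cvl_0_0`) is moved into a private namespace as the target at 10.0.0.2, the peer port (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of that sequence, collected into an array rather than executed (the real commands need root and the physical NICs; interface and namespace names are taken from this log):

```shell
# Commands nvmf_tcp_init issues to split one dual-port NIC into a
# target namespace and an initiator in the root namespace (sketch only;
# not run here since they require root and real interfaces).
ns="cvl_0_0_ns_spdk"
setup_cmds=(
  "ip netns add $ns"
  "ip link set cvl_0_0 netns $ns"
  "ip addr add 10.0.0.1/24 dev cvl_0_1"
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"
  "ip link set cvl_0_1 up"
  "ip netns exec $ns ip link set cvl_0_0 up"
  "ip netns exec $ns ip link set lo up"
)
printf '%s\n' "${setup_cmds[@]}"
```

The namespace is why every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, and why the two cross-namespace pings below prove the path before any NVMe/TCP traffic is sent.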
00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:30:14.549 00:30:14.549 --- 10.0.0.2 ping statistics --- 00:30:14.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.549 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:14.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:30:14.549 00:30:14.549 --- 10.0.0.1 ping statistics --- 00:30:14.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.549 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.549 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:14.550 ************************************ 00:30:14.550 START TEST nvmf_digest_clean 00:30:14.550 ************************************ 00:30:14.550 
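The `nvmf_digest_clean` test that starts here ultimately passes or fails on the accel statistics: after each bdevperf run it reads `accel_get_stats`, extracts the crc32c operation's `module_name` and `executed` count, and requires that digest work actually ran in the expected module (`software` in this run, since DSA is disabled). A sketch of that check against a mocked stats blob (field names match the log's `jq` filter; the values here are illustrative, and plain `sed` stands in for `jq`):

```shell
# Mocked accel_get_stats output; the real test pipes the RPC response
# through jq: .operations[] | select(.opcode=="crc32c").
stats='{"operations":[{"opcode":"crc32c","module_name":"software","executed":39266}]}'
acc_module=$(printf '%s' "$stats" | sed -n 's/.*"module_name": *"\([^"]*\)".*/\1/p')
acc_executed=$(printf '%s' "$stats" | sed -n 's/.*"executed": *\([0-9]*\).*/\1/p')
exp_module=software

# Pass criterion mirrored from the log: work was executed, and in the
# expected module.
if (( acc_executed > 0 )) && [ "$acc_module" = "$exp_module" ]; then
  echo "digest offload check passed: $acc_executed ops in $acc_module"
fi
```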
11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=126777 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 126777 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 126777 ']' 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:14.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:14.550 11:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:14.550 [2024-11-19 11:24:22.460471] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:14.550 [2024-11-19 11:24:22.460521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.550 [2024-11-19 11:24:22.544683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.550 [2024-11-19 11:24:22.578885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.550 [2024-11-19 11:24:22.578915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.550 [2024-11-19 11:24:22.578924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.550 [2024-11-19 11:24:22.578930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.550 [2024-11-19 11:24:22.578936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:14.550 [2024-11-19 11:24:22.579508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:15.122 null0 00:30:15.122 [2024-11-19 11:24:23.339810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.122 [2024-11-19 11:24:23.364003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=126871 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 126871 /var/tmp/bperf.sock 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 126871 ']' 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:15.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.122 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:15.122 [2024-11-19 11:24:23.394562] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:15.122 [2024-11-19 11:24:23.394598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126871 ] 00:30:15.384 [2024-11-19 11:24:23.479747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.384 [2024-11-19 11:24:23.516253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.384 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.384 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:15.384 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:15.384 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:15.384 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:15.645 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:15.645 11:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:15.645 nvme0n1 00:30:15.905 11:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:15.905 11:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:15.905 Running I/O for 2 seconds... 00:30:17.788 19432.00 IOPS, 75.91 MiB/s [2024-11-19T10:24:26.140Z] 19621.50 IOPS, 76.65 MiB/s 00:30:17.788 Latency(us) 00:30:17.788 [2024-11-19T10:24:26.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.788 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:17.788 nvme0n1 : 2.00 19645.23 76.74 0.00 0.00 6508.20 2976.43 14745.60 00:30:17.788 [2024-11-19T10:24:26.140Z] =================================================================================================================== 00:30:17.788 [2024-11-19T10:24:26.140Z] Total : 19645.23 76.74 0.00 0.00 6508.20 2976.43 14745.60 00:30:17.788 { 00:30:17.788 "results": [ 00:30:17.788 { 00:30:17.788 "job": "nvme0n1", 00:30:17.788 "core_mask": "0x2", 00:30:17.788 "workload": "randread", 00:30:17.788 "status": "finished", 00:30:17.788 "queue_depth": 128, 00:30:17.788 "io_size": 4096, 00:30:17.788 "runtime": 2.0041, 00:30:17.788 "iops": 19645.22728406766, 00:30:17.788 "mibps": 76.7391690783893, 00:30:17.788 "io_failed": 0, 00:30:17.788 "io_timeout": 0, 00:30:17.788 "avg_latency_us": 6508.195930337896, 00:30:17.788 "min_latency_us": 2976.4266666666667, 00:30:17.788 "max_latency_us": 14745.6 00:30:17.788 } 00:30:17.788 ], 00:30:17.788 "core_count": 1 00:30:17.788 } 00:30:17.788 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:17.788 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
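bdevperf reports its outcome both as the human-readable latency table and as the JSON blob shown above. A minimal parse of that blob (abbreviated and mocked here, with only two fields kept; `sed` is used so no `jq` dependency is assumed):

```shell
# Abbreviated copy of the results JSON printed by bdevperf above;
# real output carries latency fields as well.
results='{"results":[{"job":"nvme0n1","iops":19645.22728406766,"io_failed":0}],"core_count":1}'
iops=$(printf '%s' "$results" | sed -n 's/.*"iops":\([0-9.]*\).*/\1/p')
io_failed=$(printf '%s' "$results" | sed -n 's/.*"io_failed":\([0-9]*\).*/\1/p')
echo "iops=$iops io_failed=$io_failed"
```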
00:30:17.788 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:17.788 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:17.788 | select(.opcode=="crc32c") 00:30:17.788 | "\(.module_name) \(.executed)"' 00:30:17.788 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 126871 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 126871 ']' 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 126871 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126871 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126871' 00:30:18.049 killing process with pid 126871 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 126871 00:30:18.049 Received shutdown signal, test time was about 2.000000 seconds 00:30:18.049 00:30:18.049 Latency(us) 00:30:18.049 [2024-11-19T10:24:26.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.049 [2024-11-19T10:24:26.401Z] =================================================================================================================== 00:30:18.049 [2024-11-19T10:24:26.401Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:18.049 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 126871 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=127485 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 127485 /var/tmp/bperf.sock 
00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 127485 ']' 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:18.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.311 11:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:18.311 [2024-11-19 11:24:26.534947] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:18.311 [2024-11-19 11:24:26.535006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127485 ] 00:30:18.311 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:18.311 Zero copy mechanism will not be used. 
00:30:18.311 [2024-11-19 11:24:26.630147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.572 [2024-11-19 11:24:26.664948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.144 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.144 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:19.144 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:19.144 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:19.144 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:19.405 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:19.405 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:19.666 nvme0n1 00:30:19.666 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:19.666 11:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:19.666 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:19.666 Zero copy mechanism will not be used. 00:30:19.666 Running I/O for 2 seconds... 
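The "I/O size of 131072 is greater than zero copy threshold (65536)" notice repeated for this 128 KiB run reflects a simple size comparison: requests above the threshold fall back to the copying path. A sketch of that decision (the 65536-byte threshold is taken from the log line itself, not looked up in SPDK source):

```shell
# Zero-copy decision as reported in the log: disabled once the request
# size exceeds the threshold. Values mirror this 131072-byte run.
io_size=131072
zcopy_threshold=65536
if (( io_size > zcopy_threshold )); then
  zcopy=disabled
else
  zcopy=enabled
fi
echo "zero copy: $zcopy for io_size=$io_size"
```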
00:30:21.550 3697.00 IOPS, 462.12 MiB/s [2024-11-19T10:24:29.902Z] 3463.50 IOPS, 432.94 MiB/s 00:30:21.550 Latency(us) 00:30:21.550 [2024-11-19T10:24:29.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.550 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:21.550 nvme0n1 : 2.00 3466.19 433.27 0.00 0.00 4612.07 699.73 8028.16 00:30:21.550 [2024-11-19T10:24:29.902Z] =================================================================================================================== 00:30:21.550 [2024-11-19T10:24:29.902Z] Total : 3466.19 433.27 0.00 0.00 4612.07 699.73 8028.16 00:30:21.550 { 00:30:21.550 "results": [ 00:30:21.550 { 00:30:21.550 "job": "nvme0n1", 00:30:21.550 "core_mask": "0x2", 00:30:21.550 "workload": "randread", 00:30:21.550 "status": "finished", 00:30:21.550 "queue_depth": 16, 00:30:21.550 "io_size": 131072, 00:30:21.550 "runtime": 2.003061, 00:30:21.550 "iops": 3466.1949885699937, 00:30:21.550 "mibps": 433.2743735712492, 00:30:21.550 "io_failed": 0, 00:30:21.550 "io_timeout": 0, 00:30:21.550 "avg_latency_us": 4612.071861347161, 00:30:21.550 "min_latency_us": 699.7333333333333, 00:30:21.550 "max_latency_us": 8028.16 00:30:21.550 } 00:30:21.550 ], 00:30:21.550 "core_count": 1 00:30:21.550 } 00:30:21.812 11:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:21.812 11:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:21.812 11:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:21.812 11:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:21.812 | select(.opcode=="crc32c") 00:30:21.812 | "\(.module_name) \(.executed)"' 00:30:21.812 11:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 127485 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 127485 ']' 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 127485 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127485 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127485' 00:30:21.812 killing process with pid 127485 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 127485 00:30:21.812 Received shutdown signal, test time was about 2.000000 seconds 00:30:21.812 
00:30:21.812 Latency(us) 00:30:21.812 [2024-11-19T10:24:30.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.812 [2024-11-19T10:24:30.164Z] =================================================================================================================== 00:30:21.812 [2024-11-19T10:24:30.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.812 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 127485 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=128188 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 128188 /var/tmp/bperf.sock 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 128188 ']' 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:22.074 11:24:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:22.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.074 11:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:22.074 [2024-11-19 11:24:30.311713] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:22.074 [2024-11-19 11:24:30.311768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128188 ] 00:30:22.074 [2024-11-19 11:24:30.401763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.336 [2024-11-19 11:24:30.431209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.908 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.908 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:22.908 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:22.909 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:22.909 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:23.170 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:23.170 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:23.431 nvme0n1 00:30:23.431 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:23.431 11:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:23.692 Running I/O for 2 seconds... 
00:30:25.583 21585.00 IOPS, 84.32 MiB/s [2024-11-19T10:24:33.935Z] 21636.50 IOPS, 84.52 MiB/s 00:30:25.583 Latency(us) 00:30:25.583 [2024-11-19T10:24:33.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.583 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:25.583 nvme0n1 : 2.01 21653.10 84.58 0.00 0.00 5903.24 1993.39 10485.76 00:30:25.583 [2024-11-19T10:24:33.935Z] =================================================================================================================== 00:30:25.583 [2024-11-19T10:24:33.935Z] Total : 21653.10 84.58 0.00 0.00 5903.24 1993.39 10485.76 00:30:25.583 { 00:30:25.583 "results": [ 00:30:25.583 { 00:30:25.583 "job": "nvme0n1", 00:30:25.583 "core_mask": "0x2", 00:30:25.583 "workload": "randwrite", 00:30:25.583 "status": "finished", 00:30:25.583 "queue_depth": 128, 00:30:25.583 "io_size": 4096, 00:30:25.583 "runtime": 2.007334, 00:30:25.583 "iops": 21653.098089306513, 00:30:25.583 "mibps": 84.58241441135357, 00:30:25.583 "io_failed": 0, 00:30:25.583 "io_timeout": 0, 00:30:25.583 "avg_latency_us": 5903.2386309291, 00:30:25.583 "min_latency_us": 1993.3866666666668, 00:30:25.583 "max_latency_us": 10485.76 00:30:25.583 } 00:30:25.583 ], 00:30:25.583 "core_count": 1 00:30:25.583 } 00:30:25.583 11:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:25.584 11:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:25.584 11:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:25.584 11:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:25.584 | select(.opcode=="crc32c") 00:30:25.584 | "\(.module_name) \(.executed)"' 00:30:25.584 11:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 128188 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 128188 ']' 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 128188 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128188 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128188' 00:30:25.845 killing process with pid 128188 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 128188 00:30:25.845 Received shutdown signal, test time was about 2.000000 seconds 00:30:25.845 
00:30:25.845 Latency(us) 00:30:25.845 [2024-11-19T10:24:34.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.845 [2024-11-19T10:24:34.197Z] =================================================================================================================== 00:30:25.845 [2024-11-19T10:24:34.197Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 128188 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=129024 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 129024 /var/tmp/bperf.sock 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 129024 ']' 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:25.845 11:24:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:25.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:25.845 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.108 11:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:26.108 [2024-11-19 11:24:34.241793] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:26.108 [2024-11-19 11:24:34.241851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129024 ] 00:30:26.108 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:26.108 Zero copy mechanism will not be used. 
00:30:26.108 [2024-11-19 11:24:34.331410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.108 [2024-11-19 11:24:34.361054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.051 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.051 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:27.051 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:27.051 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:27.051 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:27.051 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.051 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.312 nvme0n1 00:30:27.312 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:27.312 11:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:27.573 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:27.573 Zero copy mechanism will not be used. 00:30:27.573 Running I/O for 2 seconds... 
00:30:29.461 4839.00 IOPS, 604.88 MiB/s [2024-11-19T10:24:37.813Z] 5134.50 IOPS, 641.81 MiB/s 00:30:29.461 Latency(us) 00:30:29.461 [2024-11-19T10:24:37.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.461 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:29.461 nvme0n1 : 2.00 5130.18 641.27 0.00 0.00 3113.01 1542.83 6471.68 00:30:29.461 [2024-11-19T10:24:37.813Z] =================================================================================================================== 00:30:29.461 [2024-11-19T10:24:37.813Z] Total : 5130.18 641.27 0.00 0.00 3113.01 1542.83 6471.68 00:30:29.461 { 00:30:29.461 "results": [ 00:30:29.461 { 00:30:29.461 "job": "nvme0n1", 00:30:29.461 "core_mask": "0x2", 00:30:29.461 "workload": "randwrite", 00:30:29.461 "status": "finished", 00:30:29.461 "queue_depth": 16, 00:30:29.461 "io_size": 131072, 00:30:29.461 "runtime": 2.004803, 00:30:29.461 "iops": 5130.179873034906, 00:30:29.461 "mibps": 641.2724841293633, 00:30:29.461 "io_failed": 0, 00:30:29.461 "io_timeout": 0, 00:30:29.461 "avg_latency_us": 3113.0081218603145, 00:30:29.461 "min_latency_us": 1542.8266666666666, 00:30:29.461 "max_latency_us": 6471.68 00:30:29.461 } 00:30:29.461 ], 00:30:29.461 "core_count": 1 00:30:29.461 } 00:30:29.461 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:29.461 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:29.461 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:29.461 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:29.461 | select(.opcode=="crc32c") 00:30:29.461 | "\(.module_name) \(.executed)"' 00:30:29.461 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 129024 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 129024 ']' 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 129024 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129024 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129024' 00:30:29.723 killing process with pid 129024 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 129024 00:30:29.723 Received shutdown signal, test time was about 2.000000 seconds 00:30:29.723 
00:30:29.723 Latency(us) 00:30:29.723 [2024-11-19T10:24:38.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.723 [2024-11-19T10:24:38.075Z] =================================================================================================================== 00:30:29.723 [2024-11-19T10:24:38.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:29.723 11:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 129024 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 126777 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 126777 ']' 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 126777 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126777 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126777' 00:30:29.984 killing process with pid 126777 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 126777 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 126777 00:30:29.984 00:30:29.984 real 0m15.876s 
00:30:29.984 user 0m31.229s 00:30:29.984 sys 0m3.561s 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:29.984 ************************************ 00:30:29.984 END TEST nvmf_digest_clean 00:30:29.984 ************************************ 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.984 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:30.245 ************************************ 00:30:30.245 START TEST nvmf_digest_error 00:30:30.245 ************************************ 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=129903 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 129903 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:30.245 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 129903 ']' 00:30:30.246 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.246 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.246 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.246 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.246 11:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:30.246 [2024-11-19 11:24:38.422809] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:30.246 [2024-11-19 11:24:38.422909] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.246 [2024-11-19 11:24:38.516894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.246 [2024-11-19 11:24:38.556826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.246 [2024-11-19 11:24:38.556871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:30.246 [2024-11-19 11:24:38.556880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.246 [2024-11-19 11:24:38.556887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.246 [2024-11-19 11:24:38.556893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.246 [2024-11-19 11:24:38.557500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:31.188 [2024-11-19 11:24:39.259511] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.188 11:24:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:31.188 null0 00:30:31.188 [2024-11-19 11:24:39.341850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.188 [2024-11-19 11:24:39.366056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=129998 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 129998 /var/tmp/bperf.sock 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 129998 ']' 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:31.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.188 11:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:31.188 [2024-11-19 11:24:39.430776] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:31.188 [2024-11-19 11:24:39.430827] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129998 ] 00:30:31.188 [2024-11-19 11:24:39.519885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.450 [2024-11-19 11:24:39.550689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.020 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.020 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:32.020 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:32.020 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:32.280 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:32.280 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.280 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:32.280 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.280 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.280 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.541 nvme0n1 00:30:32.541 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:32.541 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.541 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:32.541 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.541 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:32.541 11:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:32.541 Running I/O for 2 seconds... 00:30:32.541 [2024-11-19 11:24:40.784171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.784202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.784211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.541 [2024-11-19 11:24:40.798008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.798028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.798035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.541 [2024-11-19 11:24:40.810722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.810740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.810747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.541 [2024-11-19 11:24:40.823694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.823712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6257 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.823719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.541 [2024-11-19 11:24:40.834736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.834754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.834767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.541 [2024-11-19 11:24:40.847702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.847719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.847726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.541 [2024-11-19 11:24:40.860140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.860157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.860163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.541 [2024-11-19 11:24:40.872839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.872855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.872866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.541 [2024-11-19 11:24:40.885369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.541 [2024-11-19 11:24:40.885386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.541 [2024-11-19 11:24:40.885393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.899445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.899462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.899468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.908982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.908999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.909005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.922189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.922206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.922213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.934854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.934882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.934889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.947587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.947605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.947612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.959521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.959538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.959545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.972995] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.973012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.973018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.986359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.986376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.986383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:40.998536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:40.998553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:40.998559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:41.009019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:41.009036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-11-19 11:24:41.009043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:32.802 [2024-11-19 11:24:41.022561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.802 [2024-11-19 11:24:41.022579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.022585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.035619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.035636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.035643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.047201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.047219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.047229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.061442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.061459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.061465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.072050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.072067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.072073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.086154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.086171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.086178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.098698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.098716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.098722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.109125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.109143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 
11:24:41.109149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.122161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.122178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.122185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.135669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.135686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.135692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.803 [2024-11-19 11:24:41.147791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:32.803 [2024-11-19 11:24:41.147807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-11-19 11:24:41.147814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.161433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.161453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1258 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.161460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.174170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.174187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.174193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.185592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.185609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.185615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.198432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.198448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.198455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.211767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.211784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.211791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.221730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.221747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.221754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.234109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.234127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.234133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.250080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.250097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.250104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.262936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.262953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.262959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.273521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.273538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.273544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.286192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.286209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.286215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.299104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.299121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.299128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.312260] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.312277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.312283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.324383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.324400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.324407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.337032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.337049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.337056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.349908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.349925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.349932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.362565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.362582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.362589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.375094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.375110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.375120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.385436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.385453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.385460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.065 [2024-11-19 11:24:41.398384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.065 [2024-11-19 11:24:41.398402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.065 [2024-11-19 11:24:41.398408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.066 [2024-11-19 11:24:41.411742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.066 [2024-11-19 11:24:41.411759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.066 [2024-11-19 11:24:41.411766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.423737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.423755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 11:24:41.423761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.436044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.436062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 11:24:41.436068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.449876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.449893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 
11:24:41.449900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.461936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.461953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 11:24:41.461959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.475178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.475195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 11:24:41.475202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.487605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.487622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 11:24:41.487629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.499131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.499149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17417 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 11:24:41.499156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.510949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.510966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 11:24:41.510972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.327 [2024-11-19 11:24:41.524645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.327 [2024-11-19 11:24:41.524661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.327 [2024-11-19 11:24:41.524668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.537551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.537568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.537574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.547913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.547930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.547936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.560842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.560859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.560870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.575440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.575457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.575463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.586929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.586945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.586955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.599798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.599815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.599821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.612579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.612595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.612602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.623803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.623820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.623826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.636868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:33.328 [2024-11-19 11:24:41.636885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.328 [2024-11-19 11:24:41.636892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.328 [2024-11-19 11:24:41.648567] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0)
00:30:33.328 [2024-11-19 11:24:41.648585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:33.328 [2024-11-19 11:24:41.648591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... many further occurrences of the same three-record pattern (nvme_tcp.c:1365 data digest error on tqpair=(0x22022f0), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 completion with TRANSIENT TRANSPORT ERROR (00/22)) between 11:24:41.661 and 11:24:42.656 omitted ...]
20122.00 IOPS, 78.60 MiB/s [2024-11-19T10:24:41.941Z]
00:30:34.378 [2024-11-19 11:24:42.667867]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:34.378 [2024-11-19 11:24:42.667883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.378 [2024-11-19 11:24:42.667893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.378 [2024-11-19 11:24:42.681246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:34.378 [2024-11-19 11:24:42.681263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.378 [2024-11-19 11:24:42.681269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.378 [2024-11-19 11:24:42.691859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:34.378 [2024-11-19 11:24:42.691880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.378 [2024-11-19 11:24:42.691886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.378 [2024-11-19 11:24:42.705172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:34.378 [2024-11-19 11:24:42.705189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.378 [2024-11-19 11:24:42.705195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:34.378 [2024-11-19 11:24:42.717837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:34.378 [2024-11-19 11:24:42.717854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.378 [2024-11-19 11:24:42.717860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.639 [2024-11-19 11:24:42.731124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:34.639 [2024-11-19 11:24:42.731142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.639 [2024-11-19 11:24:42.731148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.639 [2024-11-19 11:24:42.744244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:34.639 [2024-11-19 11:24:42.744261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.639 [2024-11-19 11:24:42.744268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:34.639 [2024-11-19 11:24:42.756548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0) 00:30:34.639 [2024-11-19 11:24:42.756565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:34.639 [2024-11-19 11:24:42.756572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:34.639 20237.00 IOPS, 79.05 MiB/s [2024-11-19T10:24:42.991Z] [2024-11-19 11:24:42.767742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22022f0)
00:30:34.639 [2024-11-19 11:24:42.767758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:34.639 [2024-11-19 11:24:42.767765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:34.639
00:30:34.639 Latency(us)
00:30:34.639 [2024-11-19T10:24:42.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:34.639 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:34.639 nvme0n1 : 2.01 20245.97 79.09 0.00 0.00 6313.86 2129.92 21408.43
00:30:34.639 [2024-11-19T10:24:42.991Z] ===================================================================================================================
00:30:34.639 [2024-11-19T10:24:42.991Z] Total : 20245.97 79.09 0.00 0.00 6313.86 2129.92 21408.43
00:30:34.639 {
00:30:34.639 "results": [
00:30:34.639 {
00:30:34.639 "job": "nvme0n1",
00:30:34.640 "core_mask": "0x2",
00:30:34.640 "workload": "randread",
00:30:34.640 "status": "finished",
00:30:34.640 "queue_depth": 128,
00:30:34.640 "io_size": 4096,
00:30:34.640 "runtime": 2.005436,
00:30:34.640 "iops": 20245.971449599987,
00:30:34.640 "mibps": 79.08582597499995,
00:30:34.640 "io_failed": 0,
00:30:34.640 "io_timeout": 0,
00:30:34.640 "avg_latency_us": 6313.858940282087,
00:30:34.640 "min_latency_us": 2129.92,
00:30:34.640 "max_latency_us": 21408.426666666666
00:30:34.640 }
00:30:34.640 ],
00:30:34.640 "core_count": 1
00:30:34.640 }
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
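The bperf summary in the log record above reports the same run twice: once as a plain-text latency table and once as a JSON record. As an illustrative cross-check only (the values are copied verbatim from that JSON record; this snippet is not part of the SPDK autotest scripts), the reported MiB/s figure is just iops × io_size / 2^20:

```python
# Cross-check of the bperf JSON summary printed in the log above.
# All field values are copied verbatim from that record; this is an
# illustrative sanity check, not part of the SPDK test scripts.
result = {
    "job": "nvme0n1",
    "workload": "randread",
    "queue_depth": 128,
    "io_size": 4096,               # bytes per I/O
    "runtime": 2.005436,           # seconds
    "iops": 20245.971449599987,
    "mibps": 79.08582597499995,
}

# MiB/s follows directly from IOPS at the fixed 4 KiB I/O size.
derived_mibps = result["iops"] * result["io_size"] / (1024 * 1024)
assert abs(derived_mibps - result["mibps"]) < 1e-6

# Approximate total I/Os completed over the ~2 s run.
total_ios = result["iops"] * result["runtime"]
print(f"{derived_mibps:.2f} MiB/s, ~{total_ios:.0f} I/Os")
```

The transient-error count the script extracts next (159, via `bdev_get_iostat` and `jq`) is tracked separately by the driver and does not appear in this per-job throughput record.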
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:34.640 | .driver_specific
00:30:34.640 | .nvme_error
00:30:34.640 | .status_code
00:30:34.640 | .command_transient_transport_error'
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 129998
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 129998 ']'
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 129998
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:34.640 11:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129998
00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129998'
killing process with pid 129998
11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@973 -- # kill 129998 00:30:34.901 Received shutdown signal, test time was about 2.000000 seconds 00:30:34.901 00:30:34.901 Latency(us) 00:30:34.901 [2024-11-19T10:24:43.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.901 [2024-11-19T10:24:43.253Z] =================================================================================================================== 00:30:34.901 [2024-11-19T10:24:43.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 129998 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=130789 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 130789 /var/tmp/bperf.sock 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 130789 ']' 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:34.901 11:24:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:34.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.901 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:34.901 [2024-11-19 11:24:43.192762] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:34.901 [2024-11-19 11:24:43.192819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130789 ] 00:30:34.901 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:34.901 Zero copy mechanism will not be used. 
00:30:35.162 [2024-11-19 11:24:43.282386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.162 [2024-11-19 11:24:43.312011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.732 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.732 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:35.732 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:35.732 11:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:35.993 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:35.993 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.993 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:35.993 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.993 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:35.993 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:36.253 nvme0n1 00:30:36.253 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:36.253 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.253 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.253 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.253 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:36.253 11:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:36.516 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:36.516 Zero copy mechanism will not be used. 00:30:36.516 Running I/O for 2 seconds... 00:30:36.516 [2024-11-19 11:24:44.654961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.654994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.655002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.661485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.661507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.661514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:36.516 
[2024-11-19 11:24:44.671335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.671354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.671361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.677869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.677888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.677895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.686326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.686346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.686353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.695267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.695286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.695293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.701690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.701710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.701717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.706891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.706908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.706915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.709694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.709712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.709719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.716770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.716789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.716795] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.723785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.723803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.723810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.730473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.730492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.730499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.737694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.737712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.737719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.744118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.744136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 
11:24:44.744143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.751851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.751874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.751881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.758168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.758186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.758193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.765504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.765523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.765533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.771082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.771100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.771106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.780544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.780562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.780569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.787965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.787983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.787989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.795775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.516 [2024-11-19 11:24:44.795793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.516 [2024-11-19 11:24:44.795799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:36.516 [2024-11-19 11:24:44.801443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.517 [2024-11-19 11:24:44.801461] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.517 [2024-11-19 11:24:44.801467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:36.517 [2024-11-19 11:24:44.809755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.517 [2024-11-19 11:24:44.809774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.517 [2024-11-19 11:24:44.809780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:36.517 [2024-11-19 11:24:44.818264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.517 [2024-11-19 11:24:44.818283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.517 [2024-11-19 11:24:44.818289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:36.517 [2024-11-19 11:24:44.829130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.517 [2024-11-19 11:24:44.829149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.517 [2024-11-19 11:24:44.829155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:36.517 [2024-11-19 11:24:44.841809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:36.517 [2024-11-19 
11:24:44.841831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:36.517 [2024-11-19 11:24:44.841837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:36.517 [2024-11-19 11:24:44.854395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40)
00:30:36.517 [2024-11-19 11:24:44.854414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:36.517 [2024-11-19 11:24:44.854420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... dozens of further identical cycles from 11:24:44.864 through 11:24:45.610 trimmed: each READ on qid:1 (cid 0, 7-13) fails the receive-side data digest check on tqpair 0xe8da40 in nvme_tcp_accel_seq_recv_compute_crc32_done and completes with TRANSIENT TRANSPORT ERROR (00/22) ...]
00:30:37.305 [2024-11-19 11:24:45.610275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40)
00:30:37.305 [2024-11-19 11:24:45.610292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:37.305 [2024-11-19 11:24:45.610299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.305 [2024-11-19 11:24:45.621907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.306 [2024-11-19 11:24:45.621924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.306 [2024-11-19 11:24:45.621931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.306 [2024-11-19 11:24:45.632461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.306 [2024-11-19 11:24:45.632479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.306 [2024-11-19 11:24:45.632486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.306 [2024-11-19 11:24:45.644385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.306 [2024-11-19 11:24:45.644404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.306 [2024-11-19 11:24:45.644410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.566 3259.00 IOPS, 407.38 MiB/s [2024-11-19T10:24:45.918Z] [2024-11-19 11:24:45.657677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.657696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 
[2024-11-19 11:24:45.657702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.670805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.670823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.670829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.682000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.682018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.682024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.695647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.695666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.695672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.708779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.708798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.708804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.721476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.721495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.721504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.731498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.731516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.731523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.740469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.740487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.740494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.751415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.751433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.751440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.760321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.760340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.760346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.771287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.771306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.771312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.782390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.782409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.782415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.789713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 
00:30:37.566 [2024-11-19 11:24:45.789731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.789737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.798417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.798435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.798441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.808799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.808821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.808827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.817612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.817630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.817636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.828401] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.828419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.828425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.837608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.837627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.837633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.849136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.849154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.849160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.860309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.860327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.860333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.871110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.871127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.871133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.566 [2024-11-19 11:24:45.880548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.566 [2024-11-19 11:24:45.880566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.566 [2024-11-19 11:24:45.880572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.567 [2024-11-19 11:24:45.891335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.567 [2024-11-19 11:24:45.891353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.567 [2024-11-19 11:24:45.891360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.567 [2024-11-19 11:24:45.902600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.567 [2024-11-19 11:24:45.902618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.567 [2024-11-19 11:24:45.902625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.567 [2024-11-19 11:24:45.911328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.567 [2024-11-19 11:24:45.911345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.567 [2024-11-19 11:24:45.911351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:45.921379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:45.921397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:45.921403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:45.930877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:45.930895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:45.930901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:45.942803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:45.942821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:45.942827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:45.953108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:45.953126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:45.953132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:45.962695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:45.962713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:45.962719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:45.973286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:45.973305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:45.973311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:45.981302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:45.981320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.828 [2024-11-19 11:24:45.981330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:45.992327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:45.992345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:45.992351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:46.003930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:46.003948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:46.003954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:46.012377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:46.012395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:46.012401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:46.022570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:46.022588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:46.022595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:46.032681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:46.032699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:46.032705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:46.042081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:46.042099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:46.042106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:46.051375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:46.051394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:46.051400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:46.059818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:46.059835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.828 [2024-11-19 11:24:46.059842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.828 [2024-11-19 11:24:46.071916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.828 [2024-11-19 11:24:46.071934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.071940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.081771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.081790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.081796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.091437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.091455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.091462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.102302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 
00:30:37.829 [2024-11-19 11:24:46.102320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.102327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.110920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.110937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.110943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.117473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.117492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.117498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.129035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.129054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.129060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.140269] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.140288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.140294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.147598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.147616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.147626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.157824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.157843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.157849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.168112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.168131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.168137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:30:37.829 [2024-11-19 11:24:46.177300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:37.829 [2024-11-19 11:24:46.177318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.829 [2024-11-19 11:24:46.177325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.091 [2024-11-19 11:24:46.188769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.091 [2024-11-19 11:24:46.188787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.091 [2024-11-19 11:24:46.188793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.091 [2024-11-19 11:24:46.198644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.091 [2024-11-19 11:24:46.198662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.091 [2024-11-19 11:24:46.198669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.091 [2024-11-19 11:24:46.210634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.091 [2024-11-19 11:24:46.210652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.091 [2024-11-19 11:24:46.210659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.091 [2024-11-19 11:24:46.220380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.091 [2024-11-19 11:24:46.220399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.091 [2024-11-19 11:24:46.220405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.091 [2024-11-19 11:24:46.230440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.091 [2024-11-19 11:24:46.230459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.091 [2024-11-19 11:24:46.230466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.091 [2024-11-19 11:24:46.239643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.091 [2024-11-19 11:24:46.239665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.091 [2024-11-19 11:24:46.239671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.091 [2024-11-19 11:24:46.250245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.091 [2024-11-19 11:24:46.250264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.091 [2024-11-19 11:24:46.250270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.259135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.259154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.259160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.269211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.269229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.269235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.280075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.280093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.280099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.290584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.290603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:38.092 [2024-11-19 11:24:46.290612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.300305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.300324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.300330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.312489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.312508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.312514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.325428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.325447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.325453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.338519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.338537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.338544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.351307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.351326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.351332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.364353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.364371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.364377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.375458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.375478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.375484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.385874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.385893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.385899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.396899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.396918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.396924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.408568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.408586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.408593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.418745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.418764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.418770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.429945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 
00:30:38.092 [2024-11-19 11:24:46.429964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.429973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.092 [2024-11-19 11:24:46.441764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.092 [2024-11-19 11:24:46.441782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.092 [2024-11-19 11:24:46.441788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.451062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.451081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.451088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.459527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.459546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.459552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.469977] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.469995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.470002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.481253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.481272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.481278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.493193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.493211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.493218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.504529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.504548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.504554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.514366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.514385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.514391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.524598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.524620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.524626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.534347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.534366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.534373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.543664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.543682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.543688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.555352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.555371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.555378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.567180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.567199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.354 [2024-11-19 11:24:46.567206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.354 [2024-11-19 11:24:46.575146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.354 [2024-11-19 11:24:46.575165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.355 [2024-11-19 11:24:46.575172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.355 [2024-11-19 11:24:46.586564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.355 [2024-11-19 11:24:46.586583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.355 [2024-11-19 11:24:46.586590] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.355 [2024-11-19 11:24:46.597534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.355 [2024-11-19 11:24:46.597552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.355 [2024-11-19 11:24:46.597559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.355 [2024-11-19 11:24:46.607933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.355 [2024-11-19 11:24:46.607951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.355 [2024-11-19 11:24:46.607958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.355 [2024-11-19 11:24:46.618963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.355 [2024-11-19 11:24:46.618982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.355 [2024-11-19 11:24:46.618989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:38.355 [2024-11-19 11:24:46.630133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.355 [2024-11-19 11:24:46.630152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:38.355 [2024-11-19 11:24:46.630159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:38.355 [2024-11-19 11:24:46.639545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.355 [2024-11-19 11:24:46.639563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.355 [2024-11-19 11:24:46.639570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:38.355 [2024-11-19 11:24:46.649651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe8da40) 00:30:38.355 [2024-11-19 11:24:46.649671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.355 [2024-11-19 11:24:46.649677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:38.355 3119.50 IOPS, 389.94 MiB/s 00:30:38.355 Latency(us) 00:30:38.355 [2024-11-19T10:24:46.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.355 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:38.355 nvme0n1 : 2.00 3120.71 390.09 0.00 0.00 5124.35 948.91 13489.49 00:30:38.355 [2024-11-19T10:24:46.707Z] =================================================================================================================== 00:30:38.355 [2024-11-19T10:24:46.707Z] Total : 3120.71 390.09 0.00 0.00 5124.35 948.91 13489.49 00:30:38.355 { 00:30:38.355 "results": [ 00:30:38.355 { 00:30:38.355 "job": "nvme0n1", 00:30:38.355 "core_mask": "0x2", 00:30:38.355 "workload": "randread", 00:30:38.355 "status": 
"finished", 00:30:38.355 "queue_depth": 16, 00:30:38.355 "io_size": 131072, 00:30:38.355 "runtime": 2.004352, 00:30:38.355 "iops": 3120.709336483811, 00:30:38.355 "mibps": 390.0886670604764, 00:30:38.355 "io_failed": 0, 00:30:38.355 "io_timeout": 0, 00:30:38.355 "avg_latency_us": 5124.34702051692, 00:30:38.355 "min_latency_us": 948.9066666666666, 00:30:38.355 "max_latency_us": 13489.493333333334 00:30:38.355 } 00:30:38.355 ], 00:30:38.355 "core_count": 1 00:30:38.355 } 00:30:38.355 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:38.355 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:38.355 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:38.355 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:38.355 | .driver_specific 00:30:38.355 | .nvme_error 00:30:38.355 | .status_code 00:30:38.355 | .command_transient_transport_error' 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 )) 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 130789 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 130789 ']' 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 130789 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 130789 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 130789' 00:30:38.617 killing process with pid 130789 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 130789 00:30:38.617 Received shutdown signal, test time was about 2.000000 seconds 00:30:38.617 00:30:38.617 Latency(us) 00:30:38.617 [2024-11-19T10:24:46.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.617 [2024-11-19T10:24:46.969Z] =================================================================================================================== 00:30:38.617 [2024-11-19T10:24:46.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:38.617 11:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 130789 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=131614 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 
-- # waitforlisten 131614 /var/tmp/bperf.sock 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 131614 ']' 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:38.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.878 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:38.878 [2024-11-19 11:24:47.074335] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:30:38.878 [2024-11-19 11:24:47.074395] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131614 ] 00:30:38.878 [2024-11-19 11:24:47.163817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.878 [2024-11-19 11:24:47.193707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.820 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.820 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:39.820 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:39.820 11:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:39.820 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:39.820 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.820 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:39.820 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.820 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:39.820 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:40.081 nvme0n1 00:30:40.081 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:40.081 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.081 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:40.082 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.082 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:40.082 11:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:40.343 Running I/O for 2 seconds... 
00:30:40.343 [2024-11-19 11:24:48.470544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ee5c8 00:30:40.343 [2024-11-19 11:24:48.472451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.472478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.480991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fb8b8 00:30:40.343 [2024-11-19 11:24:48.482209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.482227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.492229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eaab8 00:30:40.343 [2024-11-19 11:24:48.493456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.493473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.505044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e99d8 00:30:40.343 [2024-11-19 11:24:48.506271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.506287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.517048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.343 [2024-11-19 11:24:48.518269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.518285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.529084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e7818 00:30:40.343 [2024-11-19 11:24:48.530306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.530323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.542649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e6738 00:30:40.343 [2024-11-19 11:24:48.544511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.544527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.553070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f8a50 00:30:40.343 [2024-11-19 11:24:48.554323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.554339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.564287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e9e10 00:30:40.343 [2024-11-19 11:24:48.565523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.565539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.576990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f92c0 00:30:40.343 [2024-11-19 11:24:48.578202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.343 [2024-11-19 11:24:48.578217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:40.343 [2024-11-19 11:24:48.588948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e6fa8 00:30:40.344 [2024-11-19 11:24:48.590191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 [2024-11-19 11:24:48.590207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:40.344 [2024-11-19 11:24:48.600901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e8088 00:30:40.344 [2024-11-19 11:24:48.602110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 
[2024-11-19 11:24:48.602126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:40.344 [2024-11-19 11:24:48.614389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fc128 00:30:40.344 [2024-11-19 11:24:48.616250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 [2024-11-19 11:24:48.616269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:40.344 [2024-11-19 11:24:48.624722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fb8b8 00:30:40.344 [2024-11-19 11:24:48.625917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 [2024-11-19 11:24:48.625934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.344 [2024-11-19 11:24:48.635912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.344 [2024-11-19 11:24:48.637120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 [2024-11-19 11:24:48.637136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:40.344 [2024-11-19 11:24:48.648611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.344 [2024-11-19 11:24:48.649823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16649 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 [2024-11-19 11:24:48.649839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.344 [2024-11-19 11:24:48.660588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.344 [2024-11-19 11:24:48.661813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 [2024-11-19 11:24:48.661830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.344 [2024-11-19 11:24:48.672557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.344 [2024-11-19 11:24:48.673764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 [2024-11-19 11:24:48.673780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.344 [2024-11-19 11:24:48.684489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.344 [2024-11-19 11:24:48.685695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.344 [2024-11-19 11:24:48.685712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.605 [2024-11-19 11:24:48.696430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.605 [2024-11-19 11:24:48.697640] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.697656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.708337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.709510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.709526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.720303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.721519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.721535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.732244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.733452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.733468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.744187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.745400] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.745416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.756109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.757319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.757334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.768027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.769224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.769240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.779931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.781118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.781134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.791884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 
00:30:40.606 [2024-11-19 11:24:48.793089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.793105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.805358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.807198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.807214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.815747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f96f8 00:30:40.606 [2024-11-19 11:24:48.816952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.816968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.827699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fa7d8 00:30:40.606 [2024-11-19 11:24:48.828897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.828913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.839623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x18869c0) with pdu=0x2000166eaef0 00:30:40.606 [2024-11-19 11:24:48.840813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.840828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.851592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fc560 00:30:40.606 [2024-11-19 11:24:48.852761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.852777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.863534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e9168 00:30:40.606 [2024-11-19 11:24:48.864730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.864745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.875522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e8088 00:30:40.606 [2024-11-19 11:24:48.876712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.876729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.889032] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e6fa8 00:30:40.606 [2024-11-19 11:24:48.890866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.890882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.899357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e7818 00:30:40.606 [2024-11-19 11:24:48.900551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.900567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.911229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fb048 00:30:40.606 [2024-11-19 11:24:48.912423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.912439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.924704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fc128 00:30:40.606 [2024-11-19 11:24:48.926504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.926522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:30:40.606 [2024-11-19 11:24:48.935066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.936200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.936216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:40.606 [2024-11-19 11:24:48.947002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:40.606 [2024-11-19 11:24:48.948122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-11-19 11:24:48.948137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:48.958869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fc998 00:30:40.868 [2024-11-19 11:24:48.960013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:48.960029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:48.970817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e84c0 00:30:40.868 [2024-11-19 11:24:48.971990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:48.972005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:48.981995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fd640 00:30:40.868 [2024-11-19 11:24:48.983150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:48.983165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:48.996231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e8088 00:30:40.868 [2024-11-19 11:24:48.998015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:48.998031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.006620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fdeb0 00:30:40.868 [2024-11-19 11:24:49.007780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.007796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.018599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ff3c8 00:30:40.868 [2024-11-19 11:24:49.019765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.019781] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.029771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e73e0 00:30:40.868 [2024-11-19 11:24:49.030909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.030925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.042501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e84c0 00:30:40.868 [2024-11-19 11:24:49.043638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.043654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.054460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e95a0 00:30:40.868 [2024-11-19 11:24:49.055601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.055617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.066416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ea680 00:30:40.868 [2024-11-19 11:24:49.067601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.067617] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.079960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eb760 00:30:40.868 [2024-11-19 11:24:49.081742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.081758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.089564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ec408 00:30:40.868 [2024-11-19 11:24:49.090859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.090877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.102518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eb328 00:30:40.868 [2024-11-19 11:24:49.103662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.103678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.116073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ea248 00:30:40.868 [2024-11-19 11:24:49.117830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:40.868 [2024-11-19 11:24:49.117845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.127946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e0630 00:30:40.868 [2024-11-19 11:24:49.129710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.129726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.138357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eb328 00:30:40.868 [2024-11-19 11:24:49.139501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.139517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.149546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fe2e8 00:30:40.868 [2024-11-19 11:24:49.150675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.150690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.162258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166feb58 00:30:40.868 [2024-11-19 11:24:49.163384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11268 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.163399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.174214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166df988 00:30:40.868 [2024-11-19 11:24:49.175324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.175339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.186198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166de8a8 00:30:40.868 [2024-11-19 11:24:49.187323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.187339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.198160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f1868 00:30:40.868 [2024-11-19 11:24:49.199293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.199308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:40.868 [2024-11-19 11:24:49.211702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f0788 00:30:40.868 [2024-11-19 11:24:49.213456] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.868 [2024-11-19 11:24:49.213471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.222111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e9e10 00:30:41.130 [2024-11-19 11:24:49.223251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.223267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.233289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e27f0 00:30:41.130 [2024-11-19 11:24:49.234419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.234437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.246027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eaab8 00:30:41.130 [2024-11-19 11:24:49.247183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.247200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.257960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f0788 00:30:41.130 [2024-11-19 11:24:49.259085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.259101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.271486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e23b8 00:30:41.130 [2024-11-19 11:24:49.273273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.273288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.281834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e95a0 00:30:41.130 [2024-11-19 11:24:49.283013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.283029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.293778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e95a0 00:30:41.130 [2024-11-19 11:24:49.294901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.294916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.305698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with 
pdu=0x2000166e95a0 00:30:41.130 [2024-11-19 11:24:49.306839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.306855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.317637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e95a0 00:30:41.130 [2024-11-19 11:24:49.318771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.318786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.328766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166efae0 00:30:41.130 [2024-11-19 11:24:49.329891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.329906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.341501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eb328 00:30:41.130 [2024-11-19 11:24:49.342639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.342655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.353433] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ec408 00:30:41.130 [2024-11-19 11:24:49.354573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.354588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.365400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ed4e8 00:30:41.130 [2024-11-19 11:24:49.366537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.366552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.376564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f0350 00:30:41.130 [2024-11-19 11:24:49.377683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.377699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.389260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f0350 00:30:41.130 [2024-11-19 11:24:49.390382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.390398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.401182] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f0350 00:30:41.130 [2024-11-19 11:24:49.402311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.402326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.413105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f0350 00:30:41.130 [2024-11-19 11:24:49.414225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.414241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.425009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f0350 00:30:41.130 [2024-11-19 11:24:49.426127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.130 [2024-11-19 11:24:49.426143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.130 [2024-11-19 11:24:49.436993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ebb98 00:30:41.131 [2024-11-19 11:24:49.438112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.131 [2024-11-19 11:24:49.438127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:30:41.131 [2024-11-19 11:24:49.448940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eaab8 00:30:41.131 [2024-11-19 11:24:49.450026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.131 [2024-11-19 11:24:49.450041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.131 21295.00 IOPS, 83.18 MiB/s [2024-11-19T10:24:49.483Z] [2024-11-19 11:24:49.462426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e1710 00:30:41.131 [2024-11-19 11:24:49.464188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.131 [2024-11-19 11:24:49.464203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.131 [2024-11-19 11:24:49.472808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ed920 00:30:41.131 [2024-11-19 11:24:49.473958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.131 [2024-11-19 11:24:49.473974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.484736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eea00 00:30:41.392 [2024-11-19 11:24:49.485871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.485887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.496723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f6890 00:30:41.392 [2024-11-19 11:24:49.497844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.497859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.508668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f0350 00:30:41.392 [2024-11-19 11:24:49.509776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.509792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.522168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e27f0 00:30:41.392 [2024-11-19 11:24:49.523932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.523948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.531780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3e60 00:30:41.392 [2024-11-19 11:24:49.532878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 
[2024-11-19 11:24:49.532894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.544492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f6890 00:30:41.392 [2024-11-19 11:24:49.545637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.545656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.555676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.392 [2024-11-19 11:24:49.556787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.556802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.568351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.392 [2024-11-19 11:24:49.569473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.569489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.580287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.392 [2024-11-19 11:24:49.581418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20988 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.581434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.392 [2024-11-19 11:24:49.592211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.392 [2024-11-19 11:24:49.593334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.392 [2024-11-19 11:24:49.593349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.604138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.393 [2024-11-19 11:24:49.605272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.605287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.616026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.393 [2024-11-19 11:24:49.617146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.617163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.627950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.393 [2024-11-19 11:24:49.629042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:122 nsid:1 lba:3968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.629059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.639873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.393 [2024-11-19 11:24:49.640991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.641008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.651804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.393 [2024-11-19 11:24:49.652941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.652957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.663722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.393 [2024-11-19 11:24:49.664833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.664849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.675632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.393 [2024-11-19 11:24:49.676768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.676783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.689059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f35f0 00:30:41.393 [2024-11-19 11:24:49.690884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.690900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.699618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ed0b0 00:30:41.393 [2024-11-19 11:24:49.700733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.700749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.711576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166ebfd0 00:30:41.393 [2024-11-19 11:24:49.712709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.712725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.723510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eaef0 00:30:41.393 
[2024-11-19 11:24:49.724597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.724613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.393 [2024-11-19 11:24:49.735473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e9e10 00:30:41.393 [2024-11-19 11:24:49.736595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.393 [2024-11-19 11:24:49.736611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.655 [2024-11-19 11:24:49.747432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e8d30 00:30:41.655 [2024-11-19 11:24:49.748553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.655 [2024-11-19 11:24:49.748568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.655 [2024-11-19 11:24:49.759388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e7c50 00:30:41.655 [2024-11-19 11:24:49.760507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.655 [2024-11-19 11:24:49.760523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.655 [2024-11-19 11:24:49.771357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x18869c0) with pdu=0x2000166fb048 00:30:41.655 [2024-11-19 11:24:49.772474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.655 [2024-11-19 11:24:49.772489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.655 [2024-11-19 11:24:49.783334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fc128 00:30:41.655 [2024-11-19 11:24:49.784467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.784483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.795286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fa3a0 00:30:41.656 [2024-11-19 11:24:49.796408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.796424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.807285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f6458 00:30:41.656 [2024-11-19 11:24:49.808397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.808413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.819300] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e4de8 00:30:41.656 [2024-11-19 11:24:49.820420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.820436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.831301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e0ea0 00:30:41.656 [2024-11-19 11:24:49.832407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.832424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.843258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e4140 00:30:41.656 [2024-11-19 11:24:49.844357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.844373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.856787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e1710 00:30:41.656 [2024-11-19 11:24:49.858576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.858594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 
m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.867169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166efae0 00:30:41.656 [2024-11-19 11:24:49.868289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.868305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.879136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3e60 00:30:41.656 [2024-11-19 11:24:49.880240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.880256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.891115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166de038 00:30:41.656 [2024-11-19 11:24:49.892222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.892239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.904590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f7970 00:30:41.656 [2024-11-19 11:24:49.906335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.906351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.914996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fc560 00:30:41.656 [2024-11-19 11:24:49.916062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.916078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.926973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e5a90 00:30:41.656 [2024-11-19 11:24:49.928070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.928086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.938944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e7818 00:30:41.656 [2024-11-19 11:24:49.940076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.940092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.950109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e12d8 00:30:41.656 [2024-11-19 11:24:49.951182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.951199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.962855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:41.656 [2024-11-19 11:24:49.963949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.963965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.974803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f5be8 00:30:41.656 [2024-11-19 11:24:49.975882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.975897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.988323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f4b08 00:30:41.656 [2024-11-19 11:24:49.990026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 [2024-11-19 11:24:49.990042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.656 [2024-11-19 11:24:49.997870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e6300 00:30:41.656 [2024-11-19 11:24:49.998921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.656 
[2024-11-19 11:24:49.998937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.917 [2024-11-19 11:24:50.012659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f2948 00:30:41.917 [2024-11-19 11:24:50.014378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.917 [2024-11-19 11:24:50.014394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.917 [2024-11-19 11:24:50.023067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:41.917 [2024-11-19 11:24:50.024132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.917 [2024-11-19 11:24:50.024148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:41.917 [2024-11-19 11:24:50.035008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:41.917 [2024-11-19 11:24:50.036050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.917 [2024-11-19 11:24:50.036067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.046986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:41.918 [2024-11-19 11:24:50.048059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12460 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.048076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.058923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:41.918 [2024-11-19 11:24:50.060000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.060016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.070055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e3060 00:30:41.918 [2024-11-19 11:24:50.071101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.071118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.082805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166de470 00:30:41.918 [2024-11-19 11:24:50.083830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.083847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.096556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f7da8 00:30:41.918 [2024-11-19 11:24:50.098321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:18 nsid:1 lba:7598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.098336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.106902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f4b08 00:30:41.918 [2024-11-19 11:24:50.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.107949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.118904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e88f8 00:30:41.918 [2024-11-19 11:24:50.119948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.119964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.130834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e7818 00:30:41.918 [2024-11-19 11:24:50.131865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.131882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.142834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e6fa8 00:30:41.918 [2024-11-19 11:24:50.143868] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.143884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.154766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fc998 00:30:41.918 [2024-11-19 11:24:50.155817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.155833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.166731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f4b08 00:30:41.918 [2024-11-19 11:24:50.167800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.167819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.178682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e7818 00:30:41.918 [2024-11-19 11:24:50.179739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.179754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.189786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166de470 
00:30:41.918 [2024-11-19 11:24:50.190829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.190845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.202530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166de470 00:30:41.918 [2024-11-19 11:24:50.203546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.203561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.214429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e1f80 00:30:41.918 [2024-11-19 11:24:50.215489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.215506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.225617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166eaab8 00:30:41.918 [2024-11-19 11:24:50.226646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.226662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.238358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x18869c0) with pdu=0x2000166fd640 00:30:41.918 [2024-11-19 11:24:50.239394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.239410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.250315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fb048 00:30:41.918 [2024-11-19 11:24:50.251372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.251388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:41.918 [2024-11-19 11:24:50.261585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:41.918 [2024-11-19 11:24:50.262610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:41.918 [2024-11-19 11:24:50.262626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:42.180 [2024-11-19 11:24:50.274283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:42.180 [2024-11-19 11:24:50.275319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.180 [2024-11-19 11:24:50.275335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:42.180 [2024-11-19 11:24:50.286206] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:42.180 [2024-11-19 11:24:50.287255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.180 [2024-11-19 11:24:50.287271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:42.180 [2024-11-19 11:24:50.298140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:42.180 [2024-11-19 11:24:50.299153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.180 [2024-11-19 11:24:50.299169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:42.180 [2024-11-19 11:24:50.310063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:42.180 [2024-11-19 11:24:50.311103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.180 [2024-11-19 11:24:50.311119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:42.180 [2024-11-19 11:24:50.322006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:42.180 [2024-11-19 11:24:50.323036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.180 [2024-11-19 11:24:50.323052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:30:42.180 [2024-11-19 11:24:50.333913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:42.180 [2024-11-19 11:24:50.334928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.180 [2024-11-19 11:24:50.334944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:42.180 [2024-11-19 11:24:50.347389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f3a28 00:30:42.180 [2024-11-19 11:24:50.349028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.180 [2024-11-19 11:24:50.349044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:42.180 [2024-11-19 11:24:50.357792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166f4298 00:30:42.180 [2024-11-19 11:24:50.358825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.180 [2024-11-19 11:24:50.358841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:42.181 [2024-11-19 11:24:50.368972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166fd640 00:30:42.181 [2024-11-19 11:24:50.369986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.181 [2024-11-19 11:24:50.370002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:42.181 [2024-11-19 11:24:50.383249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e5ec8 00:30:42.181 [2024-11-19 11:24:50.384910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.181 [2024-11-19 11:24:50.384926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:42.181 [2024-11-19 11:24:50.395174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e5a90 00:30:42.181 [2024-11-19 11:24:50.396824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.181 [2024-11-19 11:24:50.396840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:42.181 [2024-11-19 11:24:50.407039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e6738 00:30:42.181 [2024-11-19 11:24:50.408677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.181 [2024-11-19 11:24:50.408693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:42.181 [2024-11-19 11:24:50.417423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e7818 00:30:42.181 [2024-11-19 11:24:50.418401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.181 [2024-11-19 11:24:50.418417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:42.181 [2024-11-19 11:24:50.429331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e27f0 00:30:42.181 [2024-11-19 11:24:50.430335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.181 [2024-11-19 11:24:50.430351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:42.181 [2024-11-19 11:24:50.441373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e27f0 00:30:42.181 [2024-11-19 11:24:50.442374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.181 [2024-11-19 11:24:50.442390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:42.181 [2024-11-19 11:24:50.453289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e27f0 00:30:42.181 [2024-11-19 11:24:50.454257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.181 [2024-11-19 11:24:50.454273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:42.181 21351.50 IOPS, 83.40 MiB/s [2024-11-19T10:24:50.533Z] [2024-11-19 11:24:50.465139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18869c0) with pdu=0x2000166e8088 00:30:42.181 [2024-11-19 11:24:50.466090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:42.181 [2024-11-19 11:24:50.466105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:42.181 00:30:42.181 Latency(us) 00:30:42.181 [2024-11-19T10:24:50.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.181 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:42.181 nvme0n1 : 2.01 21349.81 83.40 0.00 0.00 5986.47 2034.35 14090.24 00:30:42.181 [2024-11-19T10:24:50.533Z] =================================================================================================================== 00:30:42.181 [2024-11-19T10:24:50.533Z] Total : 21349.81 83.40 0.00 0.00 5986.47 2034.35 14090.24 00:30:42.181 { 00:30:42.181 "results": [ 00:30:42.181 { 00:30:42.181 "job": "nvme0n1", 00:30:42.181 "core_mask": "0x2", 00:30:42.181 "workload": "randwrite", 00:30:42.181 "status": "finished", 00:30:42.181 "queue_depth": 128, 00:30:42.181 "io_size": 4096, 00:30:42.181 "runtime": 2.006154, 00:30:42.181 "iops": 21349.806644953478, 00:30:42.181 "mibps": 83.39768220684952, 00:30:42.181 "io_failed": 0, 00:30:42.181 "io_timeout": 0, 00:30:42.181 "avg_latency_us": 5986.466430389204, 00:30:42.181 "min_latency_us": 2034.3466666666666, 00:30:42.181 "max_latency_us": 14090.24 00:30:42.181 } 00:30:42.181 ], 00:30:42.181 "core_count": 1 00:30:42.181 } 00:30:42.181 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:42.181 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:42.181 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:42.181 | .driver_specific 00:30:42.181 | .nvme_error 00:30:42.181 | .status_code 00:30:42.181 | .command_transient_transport_error' 00:30:42.181 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 )) 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 131614 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 131614 ']' 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 131614 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 131614 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 131614' 00:30:42.443 killing process with pid 131614 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 131614 00:30:42.443 Received shutdown signal, test time was about 2.000000 seconds 00:30:42.443 00:30:42.443 Latency(us) 00:30:42.443 [2024-11-19T10:24:50.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.443 [2024-11-19T10:24:50.795Z] =================================================================================================================== 00:30:42.443 
[2024-11-19T10:24:50.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:42.443 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 131614 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=132299 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 132299 /var/tmp/bperf.sock 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 132299 ']' 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:42.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.705 11:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:42.705 [2024-11-19 11:24:50.908175] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:42.705 [2024-11-19 11:24:50.908228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132299 ] 00:30:42.705 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:42.705 Zero copy mechanism will not be used. 00:30:42.705 [2024-11-19 11:24:50.997662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.705 [2024-11-19 11:24:51.025905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:43.648 11:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:43.910 nvme0n1 00:30:43.910 11:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:43.910 11:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.910 11:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:43.910 11:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.910 11:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:43.910 11:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:44.178 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:44.178 Zero copy mechanism will not be used. 00:30:44.178 Running I/O for 2 seconds... 
00:30:44.178 [2024-11-19 11:24:52.310547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.310859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.310893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.322320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.322614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.322633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.332417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.332656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.332673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.343514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.343585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.343601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.354343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.354604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.354621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.365290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.365355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.365370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.376206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.376418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.376434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.386653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.386977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.386995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.394516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.394591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.394606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.403777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.403870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.403886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.412398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.412635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.178 [2024-11-19 11:24:52.412651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.178 [2024-11-19 11:24:52.420310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.178 [2024-11-19 11:24:52.420387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:44.709 [2024-11-19 11:24:52.903953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.908188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.908478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.908494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.912891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.913066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.913082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.918222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.918544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.918561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.922818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.923093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.923109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.928536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.928724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.928740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.932468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.932642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.932658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.936214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.936384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.936400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.940007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.940181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.940197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.943825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.944005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.944021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.947727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.947903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.947919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.951347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.951521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.951537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.955247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 
00:30:44.709 [2024-11-19 11:24:52.955421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.955436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.958854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.959040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.959055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.963280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.963590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.963607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.967097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.967269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.967285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.970704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.970887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.970903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.974280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.974492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.974507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.978621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.978827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.978846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.982254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.982426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.982442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.987602] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.987720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.987736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.994221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.994434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.994449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:52.999435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.709 [2024-11-19 11:24:52.999676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.709 [2024-11-19 11:24:52.999692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.709 [2024-11-19 11:24:53.009225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.009375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.009391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:30:44.710 [2024-11-19 11:24:53.013381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.013537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.013552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.710 [2024-11-19 11:24:53.019795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.019962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.019978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.710 [2024-11-19 11:24:53.023443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.023600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.023616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.710 [2024-11-19 11:24:53.027065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.027274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.027289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.710 [2024-11-19 11:24:53.031210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.031370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.031386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.710 [2024-11-19 11:24:53.037709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.038000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.038015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.710 [2024-11-19 11:24:53.043022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.043260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.043275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.710 [2024-11-19 11:24:53.050958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.051141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.051156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.710 [2024-11-19 11:24:53.055766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.710 [2024-11-19 11:24:53.056035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.710 [2024-11-19 11:24:53.056051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.060109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.060277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.060293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.064105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.064264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.064280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.068204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.068496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:44.973 [2024-11-19 11:24:53.068512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.072150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.072321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.072337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.075670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.075835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.075850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.080172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.080339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.080354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.083849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.084008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.084023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.087635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.087795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.087811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.094210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.094385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.094400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.100811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.101093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.101118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.110559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.110967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.110983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.121325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.121548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.121567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.131636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.131919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.131935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.141861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.142297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.142314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.151903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 
00:30:44.973 [2024-11-19 11:24:53.152218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.152235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.162576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.162827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.162843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.172099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.172194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.172209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.177986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.178154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.178170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.184408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.184569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.973 [2024-11-19 11:24:53.184585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.973 [2024-11-19 11:24:53.192023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.973 [2024-11-19 11:24:53.192193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.192209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.200030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.200248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.200264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.205493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.205706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.205722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.211507] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.211796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.211813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.217621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.217779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.217795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.223873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.224135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.224151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.230448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.230748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.230764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:30:44.974 [2024-11-19 11:24:53.239079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.239281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.239297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.246557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.246814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.246830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.253157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.253477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.253493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.260103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.260272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.260288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.266642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.266960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.266977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.276146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.276455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.276471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.283330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.283622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.283646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.290084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.290433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.290449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.294529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.294698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.294713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.299021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.299186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.299202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.974 4772.00 IOPS, 596.50 MiB/s [2024-11-19T10:24:53.326Z] [2024-11-19 11:24:53.308016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.308188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.308204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.313307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.313476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.313496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.974 [2024-11-19 11:24:53.319395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:44.974 [2024-11-19 11:24:53.319668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.974 [2024-11-19 11:24:53.319684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.237 [2024-11-19 11:24:53.325234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.237 [2024-11-19 11:24:53.325580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.237 [2024-11-19 11:24:53.325598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.237 [2024-11-19 11:24:53.331499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.237 [2024-11-19 11:24:53.331790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.237 [2024-11-19 11:24:53.331808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.237 [2024-11-19 11:24:53.338945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.237 [2024-11-19 11:24:53.339251] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.339268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.345015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.345276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.345298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.349955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.350127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.350143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.356983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.357154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.357171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.365511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.365793] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.365812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.372656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.372828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.372845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.378467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.378743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.378759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.385119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.385417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.385442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.392325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with 
pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.392608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.392632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.397787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.397960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.397977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.405631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.405965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.405982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.414310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.414564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.414581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.424793] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.425064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.425080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.435212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.435519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.435537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.446660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.446891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.446908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.457341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.457678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.457695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 
11:24:53.464126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.464315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.464331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.469507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.469678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.469694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.474795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.474973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.474990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.481401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.481673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.481690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.488497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.488831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.488848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.496472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.496757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.496774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.502611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.502932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.502953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.508067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.508237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.508253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.515091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.515263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.515280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.521393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.521667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.521690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.527720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.238 [2024-11-19 11:24:53.528097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.238 [2024-11-19 11:24:53.528114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.238 [2024-11-19 11:24:53.535470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.239 [2024-11-19 11:24:53.535753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.239 [2024-11-19 11:24:53.535769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.239 [2024-11-19 11:24:53.541693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.239 [2024-11-19 11:24:53.541872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.239 [2024-11-19 11:24:53.541888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.239 [2024-11-19 11:24:53.551083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.239 [2024-11-19 11:24:53.551288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.239 [2024-11-19 11:24:53.551304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.239 [2024-11-19 11:24:53.560640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.239 [2024-11-19 11:24:53.560831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.239 [2024-11-19 11:24:53.560847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.239 [2024-11-19 11:24:53.570138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.239 [2024-11-19 11:24:53.570440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:45.239 [2024-11-19 11:24:53.570456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.239 [2024-11-19 11:24:53.580623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.239 [2024-11-19 11:24:53.580923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.239 [2024-11-19 11:24:53.580938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.239 [2024-11-19 11:24:53.587588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.239 [2024-11-19 11:24:53.587833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.239 [2024-11-19 11:24:53.587850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.594975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.595135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.595152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.603292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.603455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.603471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.611281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.611555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.611571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.619301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.619704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.619722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.629098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.629386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.629403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.636529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.636873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.636890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.643242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.643536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.643553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.650628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.650952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.650968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.657254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.657561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.657578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.663123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 
00:30:45.501 [2024-11-19 11:24:53.663289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.663306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.668583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.668744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.668760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.674007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.674184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.501 [2024-11-19 11:24:53.674201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:45.501 [2024-11-19 11:24:53.681467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:45.501 [2024-11-19 11:24:53.681734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.502 [2024-11-19 11:24:53.681752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:45.502 [2024-11-19 11:24:53.692658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.692944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.692961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.702990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.703257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.703277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.713199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.713463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.713480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.722684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.723044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.723062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.731438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.731648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.731664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.741018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.741180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.741196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.751839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.752051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.752068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.760454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.760664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.760680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.769749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.769980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.769996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.780126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.780588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.780605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.784625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.784792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.784809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.788841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.789047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.789063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.795980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.796318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.796336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.804522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.804719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.804735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.809038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.809239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.809255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.816649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.816963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.816980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.824789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.825092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.825109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.832354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.832518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.832534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.836737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.836920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.836936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.840609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.840764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.840779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.502 [2024-11-19 11:24:53.847073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.502 [2024-11-19 11:24:53.847235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.502 [2024-11-19 11:24:53.847250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.853412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.853627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.853643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.856851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.857010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.857026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.861917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.862292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.862308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.866390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.866546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.866562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.871614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.871789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.871805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.876640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.876795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.876811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.880032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.880185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.880204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.883415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.883568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.883584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.886795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.886952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.886968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.893068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.893357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.893374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.899406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.899725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.899741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.905859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.906021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.906038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.911941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.765 [2024-11-19 11:24:53.912293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.765 [2024-11-19 11:24:53.912310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.765 [2024-11-19 11:24:53.918611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.918778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.918794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.924639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.924898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.924914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.931139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.931419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.931444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.935957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.936113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.936129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.941457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.941683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.941699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.949145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.949352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.949368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.956425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.956747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.960562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.960912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.960928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.968563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.968822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.968838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.975040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.975325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.975341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.981726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.982011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.982027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.988250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.988440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.988456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:53.995165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:53.995347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:53.995363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.001879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.002139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.002154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.009816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.009987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.010003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.019098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.019322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.019338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.026850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.027039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.027055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.034259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.034561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.034578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.042939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.043334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.043352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.053305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.053475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.053494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.060527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.060851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.060872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.067626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.067915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.067930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.074177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.074487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.074504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.078218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.078370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.078386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.083400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.083653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.083668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.090418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.090768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.090786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.094606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.766 [2024-11-19 11:24:54.094807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.766 [2024-11-19 11:24:54.094823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.766 [2024-11-19 11:24:54.098612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.767 [2024-11-19 11:24:54.098760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.767 [2024-11-19 11:24:54.098776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:45.767 [2024-11-19 11:24:54.101986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.767 [2024-11-19 11:24:54.102142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.767 [2024-11-19 11:24:54.102158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:45.767 [2024-11-19 11:24:54.105599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.767 [2024-11-19 11:24:54.105752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.767 [2024-11-19 11:24:54.105768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:45.767 [2024-11-19 11:24:54.109995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.767 [2024-11-19 11:24:54.110151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.767 [2024-11-19 11:24:54.110167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:45.767 [2024-11-19 11:24:54.113328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:45.767 [2024-11-19 11:24:54.113490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:45.767 [2024-11-19 11:24:54.113506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:46.029 [2024-11-19 11:24:54.118336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.029 [2024-11-19 11:24:54.118635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.029 [2024-11-19 11:24:54.118651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:46.029 [2024-11-19 11:24:54.123052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.029 [2024-11-19 11:24:54.123207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.029 [2024-11-19 11:24:54.123223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:46.029 [2024-11-19 11:24:54.127369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.029 [2024-11-19 11:24:54.127696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.029 [2024-11-19 11:24:54.127713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:46.029 [2024-11-19 11:24:54.134158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.029 [2024-11-19 11:24:54.134477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.029 [2024-11-19 11:24:54.134494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:46.029 [2024-11-19 11:24:54.140268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.140421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.140436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.144067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.144220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.144236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.148895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.149043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.149059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.153847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.154001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.154016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.157283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.157428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.157443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.160843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.161021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.161037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.165654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.165797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.165813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.169173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.169319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.169335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.172588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.172732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.172748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.175941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.176084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.176103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.179251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.179395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.179411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:46.030 [2024-11-19 11:24:54.182524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8
00:30:46.030 [2024-11-19 11:24:54.182667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.030 [2024-11-19 11:24:54.182683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0
sqhd:0072 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.186935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.187186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.187202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.190997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.191311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.191328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.195678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.195815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.195831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.203358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.203698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.203715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.208189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.208515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.208532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.214502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.214641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.214657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.223043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.223303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.223319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.230304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.230590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.230607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.236626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.236932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.236950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.246666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.246939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.246956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.255043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.255294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.255310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.261658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.262002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:46.030 [2024-11-19 11:24:54.262019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.269432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.269575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.269591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.274512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.274659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.030 [2024-11-19 11:24:54.274675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:46.030 [2024-11-19 11:24:54.281175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.030 [2024-11-19 11:24:54.281537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.031 [2024-11-19 11:24:54.281554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:46.031 [2024-11-19 11:24:54.286291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.031 [2024-11-19 11:24:54.286435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.031 [2024-11-19 11:24:54.286451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:46.031 [2024-11-19 11:24:54.291634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.031 [2024-11-19 11:24:54.291975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.031 [2024-11-19 11:24:54.291992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:46.031 [2024-11-19 11:24:54.297706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.031 [2024-11-19 11:24:54.297975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.031 [2024-11-19 11:24:54.297999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:46.031 [2024-11-19 11:24:54.303444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1886d00) with pdu=0x2000166ff3c8 00:30:46.031 [2024-11-19 11:24:54.303728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.031 [2024-11-19 11:24:54.303744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:46.031 4756.00 IOPS, 594.50 MiB/s 00:30:46.031 Latency(us) 00:30:46.031 [2024-11-19T10:24:54.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.031 Job: nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 16, IO size: 131072) 00:30:46.031 nvme0n1 : 2.00 4757.63 594.70 0.00 0.00 3359.02 1576.96 12014.93 00:30:46.031 [2024-11-19T10:24:54.383Z] =================================================================================================================== 00:30:46.031 [2024-11-19T10:24:54.383Z] Total : 4757.63 594.70 0.00 0.00 3359.02 1576.96 12014.93 00:30:46.031 { 00:30:46.031 "results": [ 00:30:46.031 { 00:30:46.031 "job": "nvme0n1", 00:30:46.031 "core_mask": "0x2", 00:30:46.031 "workload": "randwrite", 00:30:46.031 "status": "finished", 00:30:46.031 "queue_depth": 16, 00:30:46.031 "io_size": 131072, 00:30:46.031 "runtime": 2.003727, 00:30:46.031 "iops": 4757.634148763778, 00:30:46.031 "mibps": 594.7042685954723, 00:30:46.031 "io_failed": 0, 00:30:46.031 "io_timeout": 0, 00:30:46.031 "avg_latency_us": 3359.0150368894015, 00:30:46.031 "min_latency_us": 1576.96, 00:30:46.031 "max_latency_us": 12014.933333333332 00:30:46.031 } 00:30:46.031 ], 00:30:46.031 "core_count": 1 00:30:46.031 } 00:30:46.031 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:46.031 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:46.031 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:46.031 | .driver_specific 00:30:46.031 | .nvme_error 00:30:46.031 | .status_code 00:30:46.031 | .command_transient_transport_error' 00:30:46.031 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 307 > 0 )) 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 132299 00:30:46.292 
11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 132299 ']' 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 132299 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132299 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132299' 00:30:46.292 killing process with pid 132299 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 132299 00:30:46.292 Received shutdown signal, test time was about 2.000000 seconds 00:30:46.292 00:30:46.292 Latency(us) 00:30:46.292 [2024-11-19T10:24:54.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.292 [2024-11-19T10:24:54.644Z] =================================================================================================================== 00:30:46.292 [2024-11-19T10:24:54.644Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:46.292 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 132299 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 129903 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 129903 ']' 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 129903 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129903 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129903' 00:30:46.553 killing process with pid 129903 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 129903 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 129903 00:30:46.553 00:30:46.553 real 0m16.514s 00:30:46.553 user 0m32.833s 00:30:46.553 sys 0m3.508s 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.553 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:46.553 ************************************ 00:30:46.553 END TEST nvmf_digest_error 00:30:46.553 ************************************ 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:46.814 rmmod nvme_tcp 00:30:46.814 rmmod nvme_fabrics 00:30:46.814 rmmod nvme_keyring 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 129903 ']' 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 129903 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 129903 ']' 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 129903 00:30:46.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (129903) - No such process 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 129903 is not found' 00:30:46.814 Process with pid 129903 is not found 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 
00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.814 11:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.814 11:24:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.814 11:24:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.814 11:24:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.814 11:24:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.814 11:24:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.729 11:24:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.729 00:30:48.729 real 0m42.741s 00:30:48.729 user 1m6.213s 00:30:48.729 sys 0m13.157s 00:30:48.729 11:24:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.729 11:24:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:48.729 ************************************ 00:30:48.729 END TEST nvmf_digest 00:30:48.729 ************************************ 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.028 ************************************ 00:30:49.028 START TEST nvmf_bdevperf 00:30:49.028 ************************************ 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:49.028 * Looking for test storage... 00:30:49.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@341 -- # ver2_l=1 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:49.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.028 --rc genhtml_branch_coverage=1 00:30:49.028 --rc genhtml_function_coverage=1 00:30:49.028 --rc genhtml_legend=1 00:30:49.028 --rc geninfo_all_blocks=1 00:30:49.028 --rc geninfo_unexecuted_blocks=1 00:30:49.028 00:30:49.028 ' 00:30:49.028 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:49.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.028 --rc genhtml_branch_coverage=1 00:30:49.028 --rc genhtml_function_coverage=1 00:30:49.028 --rc genhtml_legend=1 00:30:49.028 --rc geninfo_all_blocks=1 00:30:49.028 --rc geninfo_unexecuted_blocks=1 00:30:49.028 00:30:49.029 ' 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:49.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.029 --rc genhtml_branch_coverage=1 00:30:49.029 --rc genhtml_function_coverage=1 00:30:49.029 --rc genhtml_legend=1 00:30:49.029 --rc geninfo_all_blocks=1 00:30:49.029 --rc geninfo_unexecuted_blocks=1 00:30:49.029 00:30:49.029 ' 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:49.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.029 --rc genhtml_branch_coverage=1 00:30:49.029 --rc genhtml_function_coverage=1 00:30:49.029 --rc genhtml_legend=1 00:30:49.029 --rc geninfo_all_blocks=1 00:30:49.029 --rc geninfo_unexecuted_blocks=1 00:30:49.029 00:30:49.029 ' 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:49.029 11:24:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.029 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@15 -- # shopt -s extglob 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:49.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:30:49.352 11:24:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
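The `e810`/`x722`/`mlx` arrays being filled above bucket NICs by PCI vendor:device pair (Intel `0x8086`, Mellanox `0x15b3`). A hypothetical sketch of that classification, not the SPDK helper itself, using only the IDs visible in this trace:

```shell
# Hedged sketch: bucket a NIC by PCI vendor:device the way
# gather_supported_nvmf_pci_devs does. IDs below are the ones that
# appear in this log; the real script matches a longer list, and
# "0x15b3:*" here is a simplification of its explicit Mellanox IDs.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;  # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;  # Intel X722
        0x15b3:*)                    echo mlx ;;   # Mellanox ConnectX/BlueField
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the device found twice in this run
```

This is why the run below reports both `0000:31:00.0` and `0000:31:00.1` (`0x8086 - 0x159b`) as supported E810 ports.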
00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:57.495 Found 
0000:31:00.0 (0x8086 - 0x159b) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:57.495 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:57.495 Found net devices under 0000:31:00.0: cvl_0_0 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.495 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:57.496 Found net devices under 0000:31:00.1: cvl_0_1 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:57.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:30:57.496 00:30:57.496 --- 10.0.0.2 ping statistics --- 00:30:57.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.496 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:57.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:30:57.496 00:30:57.496 --- 10.0.0.1 ping statistics --- 00:30:57.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.496 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=137683 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 137683 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 137683 ']' 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.496 11:25:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:57.496 [2024-11-19 11:25:05.604499] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:57.496 [2024-11-19 11:25:05.604561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.496 [2024-11-19 11:25:05.712375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:57.496 [2024-11-19 11:25:05.765692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.496 [2024-11-19 11:25:05.765742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
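`waitforlisten 137683` above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A simplified sketch of that polling loop, under the assumption that checking for the path is enough (the real helper tests the UNIX socket and that the pid is still alive):

```shell
# Hedged sketch of the waitforlisten idea: poll for a path with a retry
# budget. Uses -e (path exists) rather than -S (socket) so it stays
# testable without a live SPDK target; names are illustrative.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0   # real script checks the RPC socket
        sleep 0.1
    done
    return 1                          # timed out waiting for the target
}
```

In the log this is what turns "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." into a bounded wait rather than an indefinite hang.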
00:30:57.496 [2024-11-19 11:25:05.765755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.496 [2024-11-19 11:25:05.765765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.496 [2024-11-19 11:25:05.765772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.496 [2024-11-19 11:25:05.767456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.496 [2024-11-19 11:25:05.767620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.496 [2024-11-19 11:25:05.767621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.069 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.069 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:58.069 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:58.069 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:58.069 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.330 [2024-11-19 11:25:06.439396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.330 11:25:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.330 Malloc0 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.330 [2024-11-19 11:25:06.506957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:58.330 { 00:30:58.330 "params": { 00:30:58.330 "name": "Nvme$subsystem", 00:30:58.330 "trtype": "$TEST_TRANSPORT", 00:30:58.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.330 "adrfam": "ipv4", 00:30:58.330 "trsvcid": "$NVMF_PORT", 00:30:58.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.330 "hdgst": ${hdgst:-false}, 00:30:58.330 "ddgst": ${ddgst:-false} 00:30:58.330 }, 00:30:58.330 "method": "bdev_nvme_attach_controller" 00:30:58.330 } 00:30:58.330 EOF 00:30:58.330 )") 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
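The heredoc above is `gen_nvmf_target_json` expanding per-controller variables into the `bdev_nvme_attach_controller` config that bdevperf reads via `--json /dev/fd/62`. A minimal standalone sketch of that pattern (function name and the fixed cnode1/host1 values mirror this run):

```shell
# Minimal sketch of the gen_nvmf_target_json pattern: shell variables
# expanded inside a heredoc produce the JSON bdevperf consumes.
gen_target_json() {
    local traddr=$1 trsvcid=$2
    cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 10.0.0.2 4420
```

The `jq .` step in the trace then validates and normalizes this before it is handed to bdevperf.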
00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:58.330 11:25:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:58.330 "params": { 00:30:58.330 "name": "Nvme1", 00:30:58.330 "trtype": "tcp", 00:30:58.330 "traddr": "10.0.0.2", 00:30:58.330 "adrfam": "ipv4", 00:30:58.330 "trsvcid": "4420", 00:30:58.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:58.331 "hdgst": false, 00:30:58.331 "ddgst": false 00:30:58.331 }, 00:30:58.331 "method": "bdev_nvme_attach_controller" 00:30:58.331 }' 00:30:58.331 [2024-11-19 11:25:06.562754] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:58.331 [2024-11-19 11:25:06.562810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138030 ] 00:30:58.331 [2024-11-19 11:25:06.640431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.331 [2024-11-19 11:25:06.676738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.902 Running I/O for 1 seconds... 
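The MiB/s column in the summary that follows is derived directly from IOPS and the 4 KiB I/O size (`-o 4096` on the bdevperf command line): MiB/s = IOPS x 4096 / 2^20. A quick check of that arithmetic against the numbers in this run:

```shell
# Verify the bdevperf throughput arithmetic:
# MiB/s = IOPS * io_size_bytes / 2^20, with io size 4096 from -o 4096.
iops_to_mibs() {
    awk -v iops="$1" -v sz="$2" 'BEGIN { printf "%.2f\n", iops * sz / 1048576 }'
}

iops_to_mibs 9086.08 4096   # -> 35.49, matching the Nvme1n1 summary row
iops_to_mibs 9047.00 4096   # -> 35.34, matching the interim sample
```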
00:30:59.845 9047.00 IOPS, 35.34 MiB/s 00:30:59.845 Latency(us) 00:30:59.845 [2024-11-19T10:25:08.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.845 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:59.845 Verification LBA range: start 0x0 length 0x4000 00:30:59.845 Nvme1n1 : 1.01 9086.08 35.49 0.00 0.00 14026.93 1952.43 12014.93 00:30:59.845 [2024-11-19T10:25:08.197Z] =================================================================================================================== 00:30:59.845 [2024-11-19T10:25:08.198Z] Total : 9086.08 35.49 0.00 0.00 14026.93 1952.43 12014.93 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=138283 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:59.846 { 00:30:59.846 "params": { 00:30:59.846 "name": "Nvme$subsystem", 00:30:59.846 "trtype": "$TEST_TRANSPORT", 00:30:59.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:59.846 "adrfam": "ipv4", 00:30:59.846 "trsvcid": "$NVMF_PORT", 00:30:59.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:59.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:59.846 "hdgst": ${hdgst:-false}, 00:30:59.846 "ddgst": 
${ddgst:-false} 00:30:59.846 }, 00:30:59.846 "method": "bdev_nvme_attach_controller" 00:30:59.846 } 00:30:59.846 EOF 00:30:59.846 )") 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:59.846 11:25:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:59.846 "params": { 00:30:59.846 "name": "Nvme1", 00:30:59.846 "trtype": "tcp", 00:30:59.846 "traddr": "10.0.0.2", 00:30:59.846 "adrfam": "ipv4", 00:30:59.846 "trsvcid": "4420", 00:30:59.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:59.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:59.846 "hdgst": false, 00:30:59.846 "ddgst": false 00:30:59.846 }, 00:30:59.846 "method": "bdev_nvme_attach_controller" 00:30:59.846 }' 00:30:59.846 [2024-11-19 11:25:08.161181] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:30:59.846 [2024-11-19 11:25:08.161235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138283 ] 00:31:00.106 [2024-11-19 11:25:08.239036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.107 [2024-11-19 11:25:08.274822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.107 Running I/O for 15 seconds... 
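This second bdevperf invocation differs from the first by `-t 15` and `-f`, so I/O keeps running while the test deliberately kills the target (the `kill -9 137683` below) to exercise reconnect behavior. A hedged sketch of assembling that argument list; the binary path is a placeholder, the flags mirror the log:

```shell
# Hedged sketch: the bdevperf flags used in this run.
#   -q 128    queue depth
#   -o 4096   I/O size in bytes
#   -w verify workload (read back and check data)
#   -t 15     run time in seconds
#   -f        continue despite failures, so the target kill is survivable
BDEVPERF=./build/examples/bdevperf   # placeholder path
args=(--json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f)
echo "$BDEVPERF ${args[*]}"
```

With `-f` set, the qpair ABORTED/SQ DELETION messages that follow the `kill -9` are expected noise from in-flight I/O being torn down, not a test failure by themselves.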
00:31:02.432 9547.00 IOPS, 37.29 MiB/s [2024-11-19T10:25:11.361Z] 10337.50 IOPS, 40.38 MiB/s [2024-11-19T10:25:11.361Z]
11:25:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 137683
00:31:03.009 11:25:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:31:03.009 [2024-11-19 11:25:11.127577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:03.009 [2024-11-19 11:25:11.127620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:03.009 [2024-11-19 11:25:11.127782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:03.009 [2024-11-19 11:25:11.127790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c command/completion *NOTICE* pairs elided: in-flight READ (lba 97840-98616) and WRITE (lba 98736-98848) commands on sqid:1, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-11-19 11:25:11.129642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.012 [2024-11-19 11:25:11.129764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.012 [2024-11-19 11:25:11.129890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.129899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91eb50 is same with the state(6) to be set 00:31:03.012 [2024-11-19 11:25:11.129908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:03.012 [2024-11-19 11:25:11.129915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:03.012 [2024-11-19 11:25:11.129921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98728 len:8 PRP1 0x0 PRP2 0x0 00:31:03.012 [2024-11-19 11:25:11.129929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.012 [2024-11-19 11:25:11.133504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:31:03.012 [2024-11-19 11:25:11.133555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.012 [2024-11-19 11:25:11.134213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.012 [2024-11-19 11:25:11.134232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.012 [2024-11-19 11:25:11.134240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.012 [2024-11-19 11:25:11.134461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.012 [2024-11-19 11:25:11.134680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.012 [2024-11-19 11:25:11.134689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.012 [2024-11-19 11:25:11.134698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.012 [2024-11-19 11:25:11.134707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.012 [2024-11-19 11:25:11.147668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.012 [2024-11-19 11:25:11.148322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.012 [2024-11-19 11:25:11.148360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.012 [2024-11-19 11:25:11.148371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.012 [2024-11-19 11:25:11.148616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.012 [2024-11-19 11:25:11.148839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.012 [2024-11-19 11:25:11.148847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.012 [2024-11-19 11:25:11.148855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.012 [2024-11-19 11:25:11.148871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.012 [2024-11-19 11:25:11.161633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.012 [2024-11-19 11:25:11.162305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.012 [2024-11-19 11:25:11.162343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.012 [2024-11-19 11:25:11.162354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.012 [2024-11-19 11:25:11.162593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.012 [2024-11-19 11:25:11.162816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.012 [2024-11-19 11:25:11.162824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.012 [2024-11-19 11:25:11.162832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.012 [2024-11-19 11:25:11.162840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.012 [2024-11-19 11:25:11.175620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.012 [2024-11-19 11:25:11.176220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.012 [2024-11-19 11:25:11.176240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.012 [2024-11-19 11:25:11.176248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.012 [2024-11-19 11:25:11.176467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.012 [2024-11-19 11:25:11.176686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.012 [2024-11-19 11:25:11.176694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.012 [2024-11-19 11:25:11.176701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.012 [2024-11-19 11:25:11.176709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.012 [2024-11-19 11:25:11.189485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.012 [2024-11-19 11:25:11.190156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.012 [2024-11-19 11:25:11.190194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.190206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.190446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.013 [2024-11-19 11:25:11.190669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.013 [2024-11-19 11:25:11.190678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.013 [2024-11-19 11:25:11.190693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.013 [2024-11-19 11:25:11.190702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.013 [2024-11-19 11:25:11.203483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.013 [2024-11-19 11:25:11.204158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.013 [2024-11-19 11:25:11.204196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.204207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.204445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.013 [2024-11-19 11:25:11.204668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.013 [2024-11-19 11:25:11.204677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.013 [2024-11-19 11:25:11.204685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.013 [2024-11-19 11:25:11.204694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.013 [2024-11-19 11:25:11.217477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.013 [2024-11-19 11:25:11.218183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.013 [2024-11-19 11:25:11.218221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.218232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.218470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.013 [2024-11-19 11:25:11.218693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.013 [2024-11-19 11:25:11.218702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.013 [2024-11-19 11:25:11.218710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.013 [2024-11-19 11:25:11.218718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.013 [2024-11-19 11:25:11.231274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.013 [2024-11-19 11:25:11.231828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.013 [2024-11-19 11:25:11.231846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.231854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.232079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.013 [2024-11-19 11:25:11.232299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.013 [2024-11-19 11:25:11.232307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.013 [2024-11-19 11:25:11.232314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.013 [2024-11-19 11:25:11.232321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.013 [2024-11-19 11:25:11.245068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.013 [2024-11-19 11:25:11.245601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.013 [2024-11-19 11:25:11.245617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.245625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.245843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.013 [2024-11-19 11:25:11.246067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.013 [2024-11-19 11:25:11.246076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.013 [2024-11-19 11:25:11.246083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.013 [2024-11-19 11:25:11.246090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.013 [2024-11-19 11:25:11.259032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.013 [2024-11-19 11:25:11.259615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.013 [2024-11-19 11:25:11.259631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.259638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.259856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.013 [2024-11-19 11:25:11.260079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.013 [2024-11-19 11:25:11.260088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.013 [2024-11-19 11:25:11.260095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.013 [2024-11-19 11:25:11.260101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.013 [2024-11-19 11:25:11.272850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.013 [2024-11-19 11:25:11.273376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.013 [2024-11-19 11:25:11.273414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.273425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.273663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.013 [2024-11-19 11:25:11.273893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.013 [2024-11-19 11:25:11.273902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.013 [2024-11-19 11:25:11.273910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.013 [2024-11-19 11:25:11.273918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.013 [2024-11-19 11:25:11.286666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.013 [2024-11-19 11:25:11.287215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.013 [2024-11-19 11:25:11.287235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.287246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.287466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.013 [2024-11-19 11:25:11.287685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.013 [2024-11-19 11:25:11.287692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.013 [2024-11-19 11:25:11.287699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.013 [2024-11-19 11:25:11.287706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.013 [2024-11-19 11:25:11.300658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.013 [2024-11-19 11:25:11.301199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.013 [2024-11-19 11:25:11.301216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.013 [2024-11-19 11:25:11.301224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.013 [2024-11-19 11:25:11.301442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.014 [2024-11-19 11:25:11.301661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.014 [2024-11-19 11:25:11.301670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.014 [2024-11-19 11:25:11.301677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.014 [2024-11-19 11:25:11.301684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.014 [2024-11-19 11:25:11.314634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.014 [2024-11-19 11:25:11.315156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.014 [2024-11-19 11:25:11.315172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.014 [2024-11-19 11:25:11.315180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.014 [2024-11-19 11:25:11.315398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.014 [2024-11-19 11:25:11.315616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.014 [2024-11-19 11:25:11.315633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.014 [2024-11-19 11:25:11.315640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.014 [2024-11-19 11:25:11.315647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.014 [2024-11-19 11:25:11.328603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.014 [2024-11-19 11:25:11.329139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.014 [2024-11-19 11:25:11.329156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.014 [2024-11-19 11:25:11.329163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.014 [2024-11-19 11:25:11.329381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.014 [2024-11-19 11:25:11.329604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.014 [2024-11-19 11:25:11.329613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.014 [2024-11-19 11:25:11.329620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.014 [2024-11-19 11:25:11.329627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.014 [2024-11-19 11:25:11.342398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.014 [2024-11-19 11:25:11.343063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.014 [2024-11-19 11:25:11.343100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.014 [2024-11-19 11:25:11.343111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.014 [2024-11-19 11:25:11.343350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.014 [2024-11-19 11:25:11.343572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.014 [2024-11-19 11:25:11.343580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.014 [2024-11-19 11:25:11.343588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.014 [2024-11-19 11:25:11.343596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.014 [2024-11-19 11:25:11.356351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.014 [2024-11-19 11:25:11.356964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.014 [2024-11-19 11:25:11.357002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.014 [2024-11-19 11:25:11.357014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.277 [2024-11-19 11:25:11.357254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.277 [2024-11-19 11:25:11.357478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.277 [2024-11-19 11:25:11.357487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.277 [2024-11-19 11:25:11.357494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.277 [2024-11-19 11:25:11.357502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.277 [2024-11-19 11:25:11.370265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.277 [2024-11-19 11:25:11.370816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.277 [2024-11-19 11:25:11.370835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.277 [2024-11-19 11:25:11.370843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.277 [2024-11-19 11:25:11.371068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.277 [2024-11-19 11:25:11.371288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.277 [2024-11-19 11:25:11.371296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.277 [2024-11-19 11:25:11.371308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.277 [2024-11-19 11:25:11.371315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.277 [2024-11-19 11:25:11.384059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.277 [2024-11-19 11:25:11.384672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.277 [2024-11-19 11:25:11.384710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.277 [2024-11-19 11:25:11.384721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.277 [2024-11-19 11:25:11.384967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.277 [2024-11-19 11:25:11.385191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.277 [2024-11-19 11:25:11.385200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.277 [2024-11-19 11:25:11.385209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.277 [2024-11-19 11:25:11.385218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.277 [2024-11-19 11:25:11.397970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.277 [2024-11-19 11:25:11.398654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.277 [2024-11-19 11:25:11.398692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.277 [2024-11-19 11:25:11.398703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.277 [2024-11-19 11:25:11.398950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.277 [2024-11-19 11:25:11.399173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.277 [2024-11-19 11:25:11.399184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.277 [2024-11-19 11:25:11.399192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.277 [2024-11-19 11:25:11.399201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.277 [2024-11-19 11:25:11.411951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.277 [2024-11-19 11:25:11.412588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.277 [2024-11-19 11:25:11.412625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.277 [2024-11-19 11:25:11.412636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.277 [2024-11-19 11:25:11.412882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.277 [2024-11-19 11:25:11.413106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.277 [2024-11-19 11:25:11.413115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.277 [2024-11-19 11:25:11.413122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.277 [2024-11-19 11:25:11.413131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.277 [2024-11-19 11:25:11.425894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.277 [2024-11-19 11:25:11.426448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.277 [2024-11-19 11:25:11.426467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.277 [2024-11-19 11:25:11.426475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.277 [2024-11-19 11:25:11.426695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.277 [2024-11-19 11:25:11.426920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.277 [2024-11-19 11:25:11.426929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.277 [2024-11-19 11:25:11.426936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.277 [2024-11-19 11:25:11.426943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.277 9538.00 IOPS, 37.26 MiB/s [2024-11-19T10:25:11.629Z] [2024-11-19 11:25:11.439882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.277 [2024-11-19 11:25:11.440191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.277 [2024-11-19 11:25:11.440210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.277 [2024-11-19 11:25:11.440218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.277 [2024-11-19 11:25:11.440438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.277 [2024-11-19 11:25:11.440657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.277 [2024-11-19 11:25:11.440665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.277 [2024-11-19 11:25:11.440672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.277 [2024-11-19 11:25:11.440679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.453834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.454384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.454400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.454408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.454626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.454845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.454853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.454860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.454873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.467825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.468477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.468514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.468530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.468768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.469001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.469011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.469019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.469027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.481776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.482456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.482493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.482504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.482742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.482972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.482981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.482989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.482997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.495745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.496313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.496333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.496340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.496559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.496778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.496786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.496793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.496800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.509536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.510208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.510246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.510257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.510495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.510722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.510731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.510738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.510746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.523499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.524047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.524067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.524075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.524294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.524522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.524531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.524538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.524545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.537287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.537822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.537839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.537846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.538069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.538288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.538297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.538304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.538311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.551080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.551620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.551637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.551644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.551868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.552087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.552096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.552107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.552113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.565067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.565602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.565618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.565626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.565844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.566069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.566078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.278 [2024-11-19 11:25:11.566085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.278 [2024-11-19 11:25:11.566092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.278 [2024-11-19 11:25:11.579034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.278 [2024-11-19 11:25:11.579624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.278 [2024-11-19 11:25:11.579640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.278 [2024-11-19 11:25:11.579647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.278 [2024-11-19 11:25:11.579870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.278 [2024-11-19 11:25:11.580090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.278 [2024-11-19 11:25:11.580098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.279 [2024-11-19 11:25:11.580105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.279 [2024-11-19 11:25:11.580112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.279 [2024-11-19 11:25:11.592850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.279 [2024-11-19 11:25:11.593392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.279 [2024-11-19 11:25:11.593409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.279 [2024-11-19 11:25:11.593416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.279 [2024-11-19 11:25:11.593635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.279 [2024-11-19 11:25:11.593854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.279 [2024-11-19 11:25:11.593868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.279 [2024-11-19 11:25:11.593875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.279 [2024-11-19 11:25:11.593882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.279 [2024-11-19 11:25:11.606827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.279 [2024-11-19 11:25:11.607504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.279 [2024-11-19 11:25:11.607542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.279 [2024-11-19 11:25:11.607552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.279 [2024-11-19 11:25:11.607791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.279 [2024-11-19 11:25:11.608022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.279 [2024-11-19 11:25:11.608032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.279 [2024-11-19 11:25:11.608040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.279 [2024-11-19 11:25:11.608048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.279 [2024-11-19 11:25:11.620793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.279 [2024-11-19 11:25:11.621454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.279 [2024-11-19 11:25:11.621492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.279 [2024-11-19 11:25:11.621503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.279 [2024-11-19 11:25:11.621741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.279 [2024-11-19 11:25:11.621971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.279 [2024-11-19 11:25:11.621981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.279 [2024-11-19 11:25:11.621989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.279 [2024-11-19 11:25:11.621997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.542 [2024-11-19 11:25:11.634760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.542 [2024-11-19 11:25:11.635356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.542 [2024-11-19 11:25:11.635375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.542 [2024-11-19 11:25:11.635383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.542 [2024-11-19 11:25:11.635603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.542 [2024-11-19 11:25:11.635822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.542 [2024-11-19 11:25:11.635830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.542 [2024-11-19 11:25:11.635838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.542 [2024-11-19 11:25:11.635845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.542 [2024-11-19 11:25:11.648589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.542 [2024-11-19 11:25:11.649136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.542 [2024-11-19 11:25:11.649154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.542 [2024-11-19 11:25:11.649166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.542 [2024-11-19 11:25:11.649385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.542 [2024-11-19 11:25:11.649604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.542 [2024-11-19 11:25:11.649612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.542 [2024-11-19 11:25:11.649619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.542 [2024-11-19 11:25:11.649625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.542 [2024-11-19 11:25:11.662386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.542 [2024-11-19 11:25:11.662955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.542 [2024-11-19 11:25:11.662973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.542 [2024-11-19 11:25:11.662980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.542 [2024-11-19 11:25:11.663199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.542 [2024-11-19 11:25:11.663417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.542 [2024-11-19 11:25:11.663425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.542 [2024-11-19 11:25:11.663432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.542 [2024-11-19 11:25:11.663439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.542 [2024-11-19 11:25:11.676207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.542 [2024-11-19 11:25:11.676746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.542 [2024-11-19 11:25:11.676763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.542 [2024-11-19 11:25:11.676771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.542 [2024-11-19 11:25:11.676996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.542 [2024-11-19 11:25:11.677215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.542 [2024-11-19 11:25:11.677223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.542 [2024-11-19 11:25:11.677231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.542 [2024-11-19 11:25:11.677238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.542 [2024-11-19 11:25:11.689997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.542 [2024-11-19 11:25:11.690618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.542 [2024-11-19 11:25:11.690656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.542 [2024-11-19 11:25:11.690666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.542 [2024-11-19 11:25:11.690912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.542 [2024-11-19 11:25:11.691141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.542 [2024-11-19 11:25:11.691149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.542 [2024-11-19 11:25:11.691158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.542 [2024-11-19 11:25:11.691166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.542 [2024-11-19 11:25:11.703930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.542 [2024-11-19 11:25:11.704480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.542 [2024-11-19 11:25:11.704499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.542 [2024-11-19 11:25:11.704507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.542 [2024-11-19 11:25:11.704727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.542 [2024-11-19 11:25:11.704953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.542 [2024-11-19 11:25:11.704962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.542 [2024-11-19 11:25:11.704969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.542 [2024-11-19 11:25:11.704977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.542 [2024-11-19 11:25:11.717817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.542 [2024-11-19 11:25:11.718465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.542 [2024-11-19 11:25:11.718503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.718513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.718752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.718982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.718991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.718999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.719007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.731785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.732384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.732403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.732411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.732630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.732849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.732856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.732875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.732882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.745637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.746207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.746226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.746234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.746453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.746671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.746680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.746687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.746694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.759449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.759969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.759986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.759994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.760212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.760430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.760438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.760445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.760452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.773424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.773954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.773970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.773978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.774196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.774415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.774424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.774431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.774438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.787401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.787844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.787861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.787874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.788092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.788310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.788319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.788326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.788332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.801295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.801823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.801839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.801846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.802070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.802289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.802298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.802305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.802312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.815272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.815803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.815819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.815826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.816050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.816268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.816276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.816283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.816290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.829255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.829786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.829801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.829812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.830037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.830256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.830264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.830271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.830278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.843246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.543 [2024-11-19 11:25:11.843773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.543 [2024-11-19 11:25:11.843789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.543 [2024-11-19 11:25:11.843797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.543 [2024-11-19 11:25:11.844020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.543 [2024-11-19 11:25:11.844239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.543 [2024-11-19 11:25:11.844247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.543 [2024-11-19 11:25:11.844254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.543 [2024-11-19 11:25:11.844261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.543 [2024-11-19 11:25:11.857207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.544 [2024-11-19 11:25:11.857818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.544 [2024-11-19 11:25:11.857856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.544 [2024-11-19 11:25:11.857876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.544 [2024-11-19 11:25:11.858115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.544 [2024-11-19 11:25:11.858338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.544 [2024-11-19 11:25:11.858347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.544 [2024-11-19 11:25:11.858354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.544 [2024-11-19 11:25:11.858362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.544 [2024-11-19 11:25:11.871125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.544 [2024-11-19 11:25:11.871799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.544 [2024-11-19 11:25:11.871836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.544 [2024-11-19 11:25:11.871847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.544 [2024-11-19 11:25:11.872095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.544 [2024-11-19 11:25:11.872327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.544 [2024-11-19 11:25:11.872336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.544 [2024-11-19 11:25:11.872343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.544 [2024-11-19 11:25:11.872351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.544 [2024-11-19 11:25:11.885086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.544 [2024-11-19 11:25:11.885745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.544 [2024-11-19 11:25:11.885782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.544 [2024-11-19 11:25:11.885794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.544 [2024-11-19 11:25:11.886046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.544 [2024-11-19 11:25:11.886270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.544 [2024-11-19 11:25:11.886279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.544 [2024-11-19 11:25:11.886287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.544 [2024-11-19 11:25:11.886295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.806 [2024-11-19 11:25:11.899055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.806 [2024-11-19 11:25:11.899736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.806 [2024-11-19 11:25:11.899773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.806 [2024-11-19 11:25:11.899785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.806 [2024-11-19 11:25:11.900033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.806 [2024-11-19 11:25:11.900257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.806 [2024-11-19 11:25:11.900266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.806 [2024-11-19 11:25:11.900274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.806 [2024-11-19 11:25:11.900282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.806 [2024-11-19 11:25:11.913032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.806 [2024-11-19 11:25:11.913519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.806 [2024-11-19 11:25:11.913538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.806 [2024-11-19 11:25:11.913546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.806 [2024-11-19 11:25:11.913765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.806 [2024-11-19 11:25:11.913991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.806 [2024-11-19 11:25:11.913999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.806 [2024-11-19 11:25:11.914012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.806 [2024-11-19 11:25:11.914020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.806 [2024-11-19 11:25:11.926983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.806 [2024-11-19 11:25:11.927516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.806 [2024-11-19 11:25:11.927533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.806 [2024-11-19 11:25:11.927541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.806 [2024-11-19 11:25:11.927760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.806 [2024-11-19 11:25:11.927985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.806 [2024-11-19 11:25:11.927994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.806 [2024-11-19 11:25:11.928001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.806 [2024-11-19 11:25:11.928007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.807 [2024-11-19 11:25:11.940968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.807 [2024-11-19 11:25:11.941509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.807 [2024-11-19 11:25:11.941525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.807 [2024-11-19 11:25:11.941532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.807 [2024-11-19 11:25:11.941751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.807 [2024-11-19 11:25:11.941975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.807 [2024-11-19 11:25:11.941985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.807 [2024-11-19 11:25:11.941992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.807 [2024-11-19 11:25:11.941998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.807 [2024-11-19 11:25:11.954777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.807 [2024-11-19 11:25:11.955319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.807 [2024-11-19 11:25:11.955336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.807 [2024-11-19 11:25:11.955344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.807 [2024-11-19 11:25:11.955563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.807 [2024-11-19 11:25:11.955781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.807 [2024-11-19 11:25:11.955789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.807 [2024-11-19 11:25:11.955796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.807 [2024-11-19 11:25:11.955803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.807 [2024-11-19 11:25:11.968766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.807 [2024-11-19 11:25:11.969223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.807 [2024-11-19 11:25:11.969240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.807 [2024-11-19 11:25:11.969247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.807 [2024-11-19 11:25:11.969465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.807 [2024-11-19 11:25:11.969684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.807 [2024-11-19 11:25:11.969692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.807 [2024-11-19 11:25:11.969699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.807 [2024-11-19 11:25:11.969706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.807 [2024-11-19 11:25:11.982665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.807 [2024-11-19 11:25:11.983128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.807 [2024-11-19 11:25:11.983144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.807 [2024-11-19 11:25:11.983152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.807 [2024-11-19 11:25:11.983370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.807 [2024-11-19 11:25:11.983588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.807 [2024-11-19 11:25:11.983597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.807 [2024-11-19 11:25:11.983604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.807 [2024-11-19 11:25:11.983610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.807 [2024-11-19 11:25:11.996563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.807 [2024-11-19 11:25:11.997102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.807 [2024-11-19 11:25:11.997119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.807 [2024-11-19 11:25:11.997127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.807 [2024-11-19 11:25:11.997345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.807 [2024-11-19 11:25:11.997564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.807 [2024-11-19 11:25:11.997572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.807 [2024-11-19 11:25:11.997579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.807 [2024-11-19 11:25:11.997586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.807 [2024-11-19 11:25:12.010524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:03.807 [2024-11-19 11:25:12.011052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.807 [2024-11-19 11:25:12.011068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:03.807 [2024-11-19 11:25:12.011079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:03.807 [2024-11-19 11:25:12.011298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:03.807 [2024-11-19 11:25:12.011516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:03.807 [2024-11-19 11:25:12.011530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:03.807 [2024-11-19 11:25:12.011537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:03.807 [2024-11-19 11:25:12.011544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:03.807 [2024-11-19 11:25:12.024483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.807 [2024-11-19 11:25:12.025148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.807 [2024-11-19 11:25:12.025186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.807 [2024-11-19 11:25:12.025197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.807 [2024-11-19 11:25:12.025435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.807 [2024-11-19 11:25:12.025657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.807 [2024-11-19 11:25:12.025666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.807 [2024-11-19 11:25:12.025673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.807 [2024-11-19 11:25:12.025681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.807 [2024-11-19 11:25:12.038433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.807 [2024-11-19 11:25:12.038873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.807 [2024-11-19 11:25:12.038893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.807 [2024-11-19 11:25:12.038901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.807 [2024-11-19 11:25:12.039120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.807 [2024-11-19 11:25:12.039339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.807 [2024-11-19 11:25:12.039347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.807 [2024-11-19 11:25:12.039355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.807 [2024-11-19 11:25:12.039362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.807 [2024-11-19 11:25:12.052305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.807 [2024-11-19 11:25:12.052836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.807 [2024-11-19 11:25:12.052852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.807 [2024-11-19 11:25:12.052860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.807 [2024-11-19 11:25:12.053084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.807 [2024-11-19 11:25:12.053303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.807 [2024-11-19 11:25:12.053316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.807 [2024-11-19 11:25:12.053323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.807 [2024-11-19 11:25:12.053329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.807 [2024-11-19 11:25:12.066270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.807 [2024-11-19 11:25:12.066887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.807 [2024-11-19 11:25:12.066925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.807 [2024-11-19 11:25:12.066935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.807 [2024-11-19 11:25:12.067173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.807 [2024-11-19 11:25:12.067396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.807 [2024-11-19 11:25:12.067405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.807 [2024-11-19 11:25:12.067412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.808 [2024-11-19 11:25:12.067420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.808 [2024-11-19 11:25:12.080167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.808 [2024-11-19 11:25:12.080840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.808 [2024-11-19 11:25:12.080884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.808 [2024-11-19 11:25:12.080897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.808 [2024-11-19 11:25:12.081137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.808 [2024-11-19 11:25:12.081359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.808 [2024-11-19 11:25:12.081368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.808 [2024-11-19 11:25:12.081376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.808 [2024-11-19 11:25:12.081384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.808 [2024-11-19 11:25:12.094343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.808 [2024-11-19 11:25:12.095057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.808 [2024-11-19 11:25:12.095095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.808 [2024-11-19 11:25:12.095106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.808 [2024-11-19 11:25:12.095344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.808 [2024-11-19 11:25:12.095567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.808 [2024-11-19 11:25:12.095575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.808 [2024-11-19 11:25:12.095582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.808 [2024-11-19 11:25:12.095595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.808 [2024-11-19 11:25:12.108138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.808 [2024-11-19 11:25:12.108812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.808 [2024-11-19 11:25:12.108850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.808 [2024-11-19 11:25:12.108870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.808 [2024-11-19 11:25:12.109109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.808 [2024-11-19 11:25:12.109332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.808 [2024-11-19 11:25:12.109340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.808 [2024-11-19 11:25:12.109347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.808 [2024-11-19 11:25:12.109355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.808 [2024-11-19 11:25:12.122104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.808 [2024-11-19 11:25:12.122779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.808 [2024-11-19 11:25:12.122817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.808 [2024-11-19 11:25:12.122829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.808 [2024-11-19 11:25:12.123078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.808 [2024-11-19 11:25:12.123302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.808 [2024-11-19 11:25:12.123310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.808 [2024-11-19 11:25:12.123317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.808 [2024-11-19 11:25:12.123325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.808 [2024-11-19 11:25:12.136074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.808 [2024-11-19 11:25:12.136739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.808 [2024-11-19 11:25:12.136777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.808 [2024-11-19 11:25:12.136788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.808 [2024-11-19 11:25:12.137036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.808 [2024-11-19 11:25:12.137260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.808 [2024-11-19 11:25:12.137269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.808 [2024-11-19 11:25:12.137277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.808 [2024-11-19 11:25:12.137285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:03.808 [2024-11-19 11:25:12.150045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:03.808 [2024-11-19 11:25:12.150682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.808 [2024-11-19 11:25:12.150719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:03.808 [2024-11-19 11:25:12.150730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:03.808 [2024-11-19 11:25:12.150976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:03.808 [2024-11-19 11:25:12.151200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:03.808 [2024-11-19 11:25:12.151208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:03.808 [2024-11-19 11:25:12.151216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:03.808 [2024-11-19 11:25:12.151224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.071 [2024-11-19 11:25:12.163905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.071 [2024-11-19 11:25:12.164463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.071 [2024-11-19 11:25:12.164482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.071 [2024-11-19 11:25:12.164490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.071 [2024-11-19 11:25:12.164710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.071 [2024-11-19 11:25:12.164942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.071 [2024-11-19 11:25:12.164951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.071 [2024-11-19 11:25:12.164958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.071 [2024-11-19 11:25:12.164965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.071 [2024-11-19 11:25:12.177701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.071 [2024-11-19 11:25:12.178339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.071 [2024-11-19 11:25:12.178377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.071 [2024-11-19 11:25:12.178387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.071 [2024-11-19 11:25:12.178626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.071 [2024-11-19 11:25:12.178848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.071 [2024-11-19 11:25:12.178856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.071 [2024-11-19 11:25:12.178874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.071 [2024-11-19 11:25:12.178882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.071 [2024-11-19 11:25:12.191620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.071 [2024-11-19 11:25:12.192191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.071 [2024-11-19 11:25:12.192229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.071 [2024-11-19 11:25:12.192240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.071 [2024-11-19 11:25:12.192483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.071 [2024-11-19 11:25:12.192706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.071 [2024-11-19 11:25:12.192714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.071 [2024-11-19 11:25:12.192722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.071 [2024-11-19 11:25:12.192730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.071 [2024-11-19 11:25:12.205475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.071 [2024-11-19 11:25:12.206165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.071 [2024-11-19 11:25:12.206202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.071 [2024-11-19 11:25:12.206213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.071 [2024-11-19 11:25:12.206452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.071 [2024-11-19 11:25:12.206674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.071 [2024-11-19 11:25:12.206683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.071 [2024-11-19 11:25:12.206690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.071 [2024-11-19 11:25:12.206698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.071 [2024-11-19 11:25:12.219444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.071 [2024-11-19 11:25:12.220020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.071 [2024-11-19 11:25:12.220039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.220047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.220267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.220485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.220494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.220501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.220508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.233251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.233880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.233917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.233930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.234171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.234394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.234412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.234420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.234429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.247172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.247716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.247735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.247743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.247969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.248189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.248198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.248205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.248212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.261155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.261682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.261720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.261732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.261990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.262215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.262224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.262232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.262240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.274987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.275626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.275663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.275674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.275920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.276144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.276152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.276161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.276173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.288918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.289549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.289587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.289598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.289836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.290068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.290078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.290086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.290094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.302831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.303514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.303552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.303562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.303801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.304032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.304042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.304049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.304057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.316798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.317386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.317405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.317413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.317632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.317851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.317859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.317873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.317880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.330617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.331166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.331184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.331191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.331410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.331628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.331637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.331644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.331651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.344587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.345239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.072 [2024-11-19 11:25:12.345276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.072 [2024-11-19 11:25:12.345287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.072 [2024-11-19 11:25:12.345525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.072 [2024-11-19 11:25:12.345748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.072 [2024-11-19 11:25:12.345756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.072 [2024-11-19 11:25:12.345764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.072 [2024-11-19 11:25:12.345772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.072 [2024-11-19 11:25:12.358534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.072 [2024-11-19 11:25:12.359158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.073 [2024-11-19 11:25:12.359195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.073 [2024-11-19 11:25:12.359206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.073 [2024-11-19 11:25:12.359444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.073 [2024-11-19 11:25:12.359667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.073 [2024-11-19 11:25:12.359675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.073 [2024-11-19 11:25:12.359683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.073 [2024-11-19 11:25:12.359691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.073 [2024-11-19 11:25:12.372476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.073 [2024-11-19 11:25:12.373108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.073 [2024-11-19 11:25:12.373145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.073 [2024-11-19 11:25:12.373156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.073 [2024-11-19 11:25:12.373399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.073 [2024-11-19 11:25:12.373621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.073 [2024-11-19 11:25:12.373630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.073 [2024-11-19 11:25:12.373637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.073 [2024-11-19 11:25:12.373645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.073 [2024-11-19 11:25:12.386387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.073 [2024-11-19 11:25:12.387001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.073 [2024-11-19 11:25:12.387038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.073 [2024-11-19 11:25:12.387051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.073 [2024-11-19 11:25:12.387292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.073 [2024-11-19 11:25:12.387515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.073 [2024-11-19 11:25:12.387524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.073 [2024-11-19 11:25:12.387532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.073 [2024-11-19 11:25:12.387540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.073 [2024-11-19 11:25:12.400285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:04.073 [2024-11-19 11:25:12.400875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:04.073 [2024-11-19 11:25:12.400895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:04.073 [2024-11-19 11:25:12.400903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:04.073 [2024-11-19 11:25:12.401122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:04.073 [2024-11-19 11:25:12.401342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:04.073 [2024-11-19 11:25:12.401350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:04.073 [2024-11-19 11:25:12.401357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:04.073 [2024-11-19 11:25:12.401363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:04.073 [2024-11-19 11:25:12.414128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.073 [2024-11-19 11:25:12.414750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.073 [2024-11-19 11:25:12.414788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.073 [2024-11-19 11:25:12.414798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.073 [2024-11-19 11:25:12.415046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.073 [2024-11-19 11:25:12.415269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.073 [2024-11-19 11:25:12.415282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.073 [2024-11-19 11:25:12.415290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.073 [2024-11-19 11:25:12.415298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.335 [2024-11-19 11:25:12.428057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.335 [2024-11-19 11:25:12.428694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.335 [2024-11-19 11:25:12.428731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.335 [2024-11-19 11:25:12.428742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.335 [2024-11-19 11:25:12.428990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.335 [2024-11-19 11:25:12.429214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.335 [2024-11-19 11:25:12.429222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.335 [2024-11-19 11:25:12.429230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.335 [2024-11-19 11:25:12.429238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.335 7153.50 IOPS, 27.94 MiB/s [2024-11-19T10:25:12.687Z] [2024-11-19 11:25:12.441968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.335 [2024-11-19 11:25:12.442518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.335 [2024-11-19 11:25:12.442537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.335 [2024-11-19 11:25:12.442545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.335 [2024-11-19 11:25:12.442764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.335 [2024-11-19 11:25:12.442991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.335 [2024-11-19 11:25:12.443000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.335 [2024-11-19 11:25:12.443007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.335 [2024-11-19 11:25:12.443014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.335 [2024-11-19 11:25:12.455952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.335 [2024-11-19 11:25:12.456570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.335 [2024-11-19 11:25:12.456607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.456618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.456856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.457088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.457097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.457105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.457117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.469905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.470455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.470474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.470482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.470701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.470928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.470938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.470945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.470952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.483741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.484294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.484311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.484319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.484538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.484756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.484765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.484772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.484779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.497713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.498340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.498378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.498388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.498626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.498849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.498857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.498874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.498882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.511640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.512308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.512349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.512360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.512598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.512821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.512828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.512836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.512844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.525592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.526269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.526306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.526316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.526555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.526777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.526786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.526794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.526802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.539564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.540272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.540308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.540319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.540557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.540780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.540787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.540796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.540804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.553550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.554052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.554088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.554100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.554349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.554572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.554579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.554587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.554596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.567348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.568085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.568122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.568134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.568374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.568596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.568604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.568612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.568620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.581190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.581746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.336 [2024-11-19 11:25:12.581782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.336 [2024-11-19 11:25:12.581794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.336 [2024-11-19 11:25:12.582042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.336 [2024-11-19 11:25:12.582266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.336 [2024-11-19 11:25:12.582273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.336 [2024-11-19 11:25:12.582281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.336 [2024-11-19 11:25:12.582290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.336 [2024-11-19 11:25:12.595034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.336 [2024-11-19 11:25:12.595691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.337 [2024-11-19 11:25:12.595728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.337 [2024-11-19 11:25:12.595739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.337 [2024-11-19 11:25:12.595985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.337 [2024-11-19 11:25:12.596208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.337 [2024-11-19 11:25:12.596221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.337 [2024-11-19 11:25:12.596229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.337 [2024-11-19 11:25:12.596237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.337 [2024-11-19 11:25:12.608980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.337 [2024-11-19 11:25:12.609664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.337 [2024-11-19 11:25:12.609700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.337 [2024-11-19 11:25:12.609710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.337 [2024-11-19 11:25:12.609958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.337 [2024-11-19 11:25:12.610189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.337 [2024-11-19 11:25:12.610197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.337 [2024-11-19 11:25:12.610205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.337 [2024-11-19 11:25:12.610213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.337 [2024-11-19 11:25:12.622956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.337 [2024-11-19 11:25:12.623619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.337 [2024-11-19 11:25:12.623656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.337 [2024-11-19 11:25:12.623667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.337 [2024-11-19 11:25:12.623913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.337 [2024-11-19 11:25:12.624136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.337 [2024-11-19 11:25:12.624144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.337 [2024-11-19 11:25:12.624152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.337 [2024-11-19 11:25:12.624161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.337 [2024-11-19 11:25:12.636911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.337 [2024-11-19 11:25:12.637520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.337 [2024-11-19 11:25:12.637557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.337 [2024-11-19 11:25:12.637567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.337 [2024-11-19 11:25:12.637807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.337 [2024-11-19 11:25:12.638038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.337 [2024-11-19 11:25:12.638047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.337 [2024-11-19 11:25:12.638055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.337 [2024-11-19 11:25:12.638063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.337 [2024-11-19 11:25:12.650808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.337 [2024-11-19 11:25:12.651481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.337 [2024-11-19 11:25:12.651518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.337 [2024-11-19 11:25:12.651529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.337 [2024-11-19 11:25:12.651767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.337 [2024-11-19 11:25:12.651999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.337 [2024-11-19 11:25:12.652009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.337 [2024-11-19 11:25:12.652017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.337 [2024-11-19 11:25:12.652025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.337 [2024-11-19 11:25:12.664762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.337 [2024-11-19 11:25:12.665351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.337 [2024-11-19 11:25:12.665370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.337 [2024-11-19 11:25:12.665378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.337 [2024-11-19 11:25:12.665598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.337 [2024-11-19 11:25:12.665816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.337 [2024-11-19 11:25:12.665824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.337 [2024-11-19 11:25:12.665831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.337 [2024-11-19 11:25:12.665839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.337 [2024-11-19 11:25:12.678613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.337 [2024-11-19 11:25:12.679284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.337 [2024-11-19 11:25:12.679320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.337 [2024-11-19 11:25:12.679331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.337 [2024-11-19 11:25:12.679570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.337 [2024-11-19 11:25:12.679792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.337 [2024-11-19 11:25:12.679800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.337 [2024-11-19 11:25:12.679809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.337 [2024-11-19 11:25:12.679817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.599 [2024-11-19 11:25:12.692573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.599 [2024-11-19 11:25:12.693035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.599 [2024-11-19 11:25:12.693058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.599 [2024-11-19 11:25:12.693067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.599 [2024-11-19 11:25:12.693287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.599 [2024-11-19 11:25:12.693505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.599 [2024-11-19 11:25:12.693513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.599 [2024-11-19 11:25:12.693520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.693529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.706484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.707175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.707212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.707225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.707465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.707688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.707696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.707704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.707712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.720470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.721017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.721036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.721044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.721263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.721482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.721490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.721497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.721504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.734258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.734928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.734966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.734978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.735223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.735447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.735455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.735464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.735472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.748222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.748890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.748928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.748940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.749180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.749403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.749411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.749419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.749428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.762186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.762744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.762781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.762793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.763038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.763261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.763270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.763277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.763286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.776042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.776526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.776545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.776553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.776773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.777000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.777009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.777021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.777028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.790004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.790652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.790690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.790700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.790948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.791172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.791180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.791188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.791196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.803937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.804614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.804651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.804662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.804909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.805132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.805140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.805148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.805156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.817896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.600 [2024-11-19 11:25:12.818534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.600 [2024-11-19 11:25:12.818571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.600 [2024-11-19 11:25:12.818582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.600 [2024-11-19 11:25:12.818820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.600 [2024-11-19 11:25:12.819051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.600 [2024-11-19 11:25:12.819061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.600 [2024-11-19 11:25:12.819068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.600 [2024-11-19 11:25:12.819077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.600 [2024-11-19 11:25:12.831832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.832494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.832531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.832542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.832780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.833012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.833021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.833029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.833037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.601 [2024-11-19 11:25:12.845783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.846378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.846398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.846407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.846626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.846845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.846853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.846860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.846871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.601 [2024-11-19 11:25:12.859607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.860237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.860255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.860263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.860481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.860700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.860707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.860714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.860721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.601 [2024-11-19 11:25:12.873466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.874104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.874146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.874159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.874398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.874621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.874629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.874637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.874645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.601 [2024-11-19 11:25:12.887398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.887987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.888025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.888038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.888279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.888502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.888510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.888517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.888526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.601 [2024-11-19 11:25:12.901276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.901817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.901837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.901845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.902069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.902288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.902297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.902305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.902312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.601 [2024-11-19 11:25:12.915260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.915835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.915851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.915859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.916083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.916306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.916314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.916322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.916328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.601 [2024-11-19 11:25:12.929063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.929692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.929730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.929741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.929987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.930210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.930219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.930227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.930235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.601 [2024-11-19 11:25:12.942983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.601 [2024-11-19 11:25:12.943445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.601 [2024-11-19 11:25:12.943466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.601 [2024-11-19 11:25:12.943474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.601 [2024-11-19 11:25:12.943694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.601 [2024-11-19 11:25:12.943918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.601 [2024-11-19 11:25:12.943928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.601 [2024-11-19 11:25:12.943935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.601 [2024-11-19 11:25:12.943942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.864 [2024-11-19 11:25:12.956922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.864 [2024-11-19 11:25:12.957494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.864 [2024-11-19 11:25:12.957532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.864 [2024-11-19 11:25:12.957543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.864 [2024-11-19 11:25:12.957781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.864 [2024-11-19 11:25:12.958011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.864 [2024-11-19 11:25:12.958021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.864 [2024-11-19 11:25:12.958033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.864 [2024-11-19 11:25:12.958042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.864 [2024-11-19 11:25:12.970799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.864 [2024-11-19 11:25:12.971408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.864 [2024-11-19 11:25:12.971428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.864 [2024-11-19 11:25:12.971436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.864 [2024-11-19 11:25:12.971655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.864 [2024-11-19 11:25:12.971880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.864 [2024-11-19 11:25:12.971888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.864 [2024-11-19 11:25:12.971895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.864 [2024-11-19 11:25:12.971902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:12.984647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:12.985269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:12.985308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:12.985320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:12.985560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:12.985783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:12.985791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:12.985799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.865 [2024-11-19 11:25:12.985807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:12.998591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:12.999227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:12.999264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:12.999275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:12.999514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:12.999737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:12.999745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:12.999753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.865 [2024-11-19 11:25:12.999761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:13.012522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:13.013216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:13.013254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:13.013267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:13.013508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:13.013731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:13.013740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:13.013747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.865 [2024-11-19 11:25:13.013756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:13.026513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:13.027041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:13.027061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:13.027068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:13.027288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:13.027506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:13.027515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:13.027522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.865 [2024-11-19 11:25:13.027529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:13.040495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:13.041176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:13.041214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:13.041225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:13.041462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:13.041685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:13.041695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:13.041702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.865 [2024-11-19 11:25:13.041711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:13.054466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:13.055036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:13.055075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:13.055091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:13.055331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:13.055554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:13.055562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:13.055570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.865 [2024-11-19 11:25:13.055578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:13.068347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:13.068779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:13.068799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:13.068807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:13.069031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:13.069251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:13.069259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:13.069266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.865 [2024-11-19 11:25:13.069273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:13.082221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:13.082798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:13.082814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:13.082822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:13.083047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:13.083267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:13.083274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:13.083281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.865 [2024-11-19 11:25:13.083288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.865 [2024-11-19 11:25:13.096029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.865 [2024-11-19 11:25:13.096608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.865 [2024-11-19 11:25:13.096626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.865 [2024-11-19 11:25:13.096634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.865 [2024-11-19 11:25:13.096853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.865 [2024-11-19 11:25:13.097081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.865 [2024-11-19 11:25:13.097089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.865 [2024-11-19 11:25:13.097097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.097104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.866 [2024-11-19 11:25:13.109852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.866 [2024-11-19 11:25:13.110504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.866 [2024-11-19 11:25:13.110543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.866 [2024-11-19 11:25:13.110554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.866 [2024-11-19 11:25:13.110792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.866 [2024-11-19 11:25:13.111022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.866 [2024-11-19 11:25:13.111032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.866 [2024-11-19 11:25:13.111040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.111048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.866 [2024-11-19 11:25:13.123795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.866 [2024-11-19 11:25:13.124404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.866 [2024-11-19 11:25:13.124442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.866 [2024-11-19 11:25:13.124454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.866 [2024-11-19 11:25:13.124694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.866 [2024-11-19 11:25:13.124924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.866 [2024-11-19 11:25:13.124933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.866 [2024-11-19 11:25:13.124941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.124949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.866 [2024-11-19 11:25:13.137704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.866 [2024-11-19 11:25:13.138250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.866 [2024-11-19 11:25:13.138270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.866 [2024-11-19 11:25:13.138278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.866 [2024-11-19 11:25:13.138498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.866 [2024-11-19 11:25:13.138718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.866 [2024-11-19 11:25:13.138726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.866 [2024-11-19 11:25:13.138738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.138746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.866 [2024-11-19 11:25:13.151492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.866 [2024-11-19 11:25:13.152154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.866 [2024-11-19 11:25:13.152192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.866 [2024-11-19 11:25:13.152203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.866 [2024-11-19 11:25:13.152442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.866 [2024-11-19 11:25:13.152664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.866 [2024-11-19 11:25:13.152672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.866 [2024-11-19 11:25:13.152680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.152688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.866 [2024-11-19 11:25:13.165452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.866 [2024-11-19 11:25:13.166008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.866 [2024-11-19 11:25:13.166028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.866 [2024-11-19 11:25:13.166036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.866 [2024-11-19 11:25:13.166255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.866 [2024-11-19 11:25:13.166474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.866 [2024-11-19 11:25:13.166481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.866 [2024-11-19 11:25:13.166488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.166495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.866 [2024-11-19 11:25:13.179446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.866 [2024-11-19 11:25:13.179995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.866 [2024-11-19 11:25:13.180013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.866 [2024-11-19 11:25:13.180021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.866 [2024-11-19 11:25:13.180239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.866 [2024-11-19 11:25:13.180458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.866 [2024-11-19 11:25:13.180466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.866 [2024-11-19 11:25:13.180473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.180480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.866 [2024-11-19 11:25:13.193324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.866 [2024-11-19 11:25:13.193891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.866 [2024-11-19 11:25:13.193909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.866 [2024-11-19 11:25:13.193917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.866 [2024-11-19 11:25:13.194135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.866 [2024-11-19 11:25:13.194353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.866 [2024-11-19 11:25:13.194362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.866 [2024-11-19 11:25:13.194369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.194375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:04.866 [2024-11-19 11:25:13.207150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:04.866 [2024-11-19 11:25:13.207598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.866 [2024-11-19 11:25:13.207615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:04.866 [2024-11-19 11:25:13.207623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:04.866 [2024-11-19 11:25:13.207842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:04.866 [2024-11-19 11:25:13.208066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:04.866 [2024-11-19 11:25:13.208074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:04.866 [2024-11-19 11:25:13.208082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:04.866 [2024-11-19 11:25:13.208088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.129 [2024-11-19 11:25:13.221033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.129 [2024-11-19 11:25:13.221563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.129 [2024-11-19 11:25:13.221579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.129 [2024-11-19 11:25:13.221586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.129 [2024-11-19 11:25:13.221804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.129 [2024-11-19 11:25:13.222027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.129 [2024-11-19 11:25:13.222037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.129 [2024-11-19 11:25:13.222044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.129 [2024-11-19 11:25:13.222051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.129 [2024-11-19 11:25:13.235010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.129 [2024-11-19 11:25:13.235661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.129 [2024-11-19 11:25:13.235699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.129 [2024-11-19 11:25:13.235719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.129 [2024-11-19 11:25:13.235966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.129 [2024-11-19 11:25:13.236189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.129 [2024-11-19 11:25:13.236198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.129 [2024-11-19 11:25:13.236205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.129 [2024-11-19 11:25:13.236213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.129 [2024-11-19 11:25:13.248960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.129 [2024-11-19 11:25:13.249546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.129 [2024-11-19 11:25:13.249565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.130 [2024-11-19 11:25:13.249573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.130 [2024-11-19 11:25:13.249792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.130 [2024-11-19 11:25:13.250017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.130 [2024-11-19 11:25:13.250027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.130 [2024-11-19 11:25:13.250034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.130 [2024-11-19 11:25:13.250042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.130 [2024-11-19 11:25:13.262784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.130 [2024-11-19 11:25:13.263344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.130 [2024-11-19 11:25:13.263382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.130 [2024-11-19 11:25:13.263394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.130 [2024-11-19 11:25:13.263636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.130 [2024-11-19 11:25:13.263858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.130 [2024-11-19 11:25:13.263879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.130 [2024-11-19 11:25:13.263888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.130 [2024-11-19 11:25:13.263896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.130 [2024-11-19 11:25:13.276654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.130 [2024-11-19 11:25:13.277312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.130 [2024-11-19 11:25:13.277350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.130 [2024-11-19 11:25:13.277361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.130 [2024-11-19 11:25:13.277599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.130 [2024-11-19 11:25:13.277827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.130 [2024-11-19 11:25:13.277835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.130 [2024-11-19 11:25:13.277843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.130 [2024-11-19 11:25:13.277851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.130 [2024-11-19 11:25:13.290612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.130 [2024-11-19 11:25:13.291267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.130 [2024-11-19 11:25:13.291305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.130 [2024-11-19 11:25:13.291316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.130 [2024-11-19 11:25:13.291554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.130 [2024-11-19 11:25:13.291777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.130 [2024-11-19 11:25:13.291785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.130 [2024-11-19 11:25:13.291793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.130 [2024-11-19 11:25:13.291801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.130 [2024-11-19 11:25:13.304550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.130 [2024-11-19 11:25:13.305133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.130 [2024-11-19 11:25:13.305154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.130 [2024-11-19 11:25:13.305162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.130 [2024-11-19 11:25:13.305381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.130 [2024-11-19 11:25:13.305599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.130 [2024-11-19 11:25:13.305607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.130 [2024-11-19 11:25:13.305614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.130 [2024-11-19 11:25:13.305621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.130 [2024-11-19 11:25:13.318366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.130 [2024-11-19 11:25:13.318964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.130 [2024-11-19 11:25:13.319002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.130 [2024-11-19 11:25:13.319014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.130 [2024-11-19 11:25:13.319254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.130 [2024-11-19 11:25:13.319477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.130 [2024-11-19 11:25:13.319486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.130 [2024-11-19 11:25:13.319497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.130 [2024-11-19 11:25:13.319505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.130 [2024-11-19 11:25:13.332273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.130 [2024-11-19 11:25:13.332817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.130 [2024-11-19 11:25:13.332836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.130 [2024-11-19 11:25:13.332844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.130 [2024-11-19 11:25:13.333069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.130 [2024-11-19 11:25:13.333289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.130 [2024-11-19 11:25:13.333297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.130 [2024-11-19 11:25:13.333304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.130 [2024-11-19 11:25:13.333310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.130 [2024-11-19 11:25:13.346257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.130 [2024-11-19 11:25:13.346877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.130 [2024-11-19 11:25:13.346915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.130 [2024-11-19 11:25:13.346926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.130 [2024-11-19 11:25:13.347164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.130 [2024-11-19 11:25:13.347387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.130 [2024-11-19 11:25:13.347395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.130 [2024-11-19 11:25:13.347403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.130 [2024-11-19 11:25:13.347411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.130 [2024-11-19 11:25:13.360158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.130 [2024-11-19 11:25:13.360709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.130 [2024-11-19 11:25:13.360727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.130 [2024-11-19 11:25:13.360735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.130 [2024-11-19 11:25:13.360959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.130 [2024-11-19 11:25:13.361179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.130 [2024-11-19 11:25:13.361187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.130 [2024-11-19 11:25:13.361194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.130 [2024-11-19 11:25:13.361201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.130 [2024-11-19 11:25:13.373967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.130 [2024-11-19 11:25:13.374548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.130 [2024-11-19 11:25:13.374565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.130 [2024-11-19 11:25:13.374573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.130 [2024-11-19 11:25:13.374791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.130 [2024-11-19 11:25:13.375016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.130 [2024-11-19 11:25:13.375027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.130 [2024-11-19 11:25:13.375033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.130 [2024-11-19 11:25:13.375040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.130 [2024-11-19 11:25:13.387778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.131 [2024-11-19 11:25:13.388400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.131 [2024-11-19 11:25:13.388438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.131 [2024-11-19 11:25:13.388450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.131 [2024-11-19 11:25:13.388690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.131 [2024-11-19 11:25:13.388921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.131 [2024-11-19 11:25:13.388931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.131 [2024-11-19 11:25:13.388939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.131 [2024-11-19 11:25:13.388947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.131 [2024-11-19 11:25:13.401692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.131 [2024-11-19 11:25:13.402238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.131 [2024-11-19 11:25:13.402259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.131 [2024-11-19 11:25:13.402267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.131 [2024-11-19 11:25:13.402486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.131 [2024-11-19 11:25:13.402705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.131 [2024-11-19 11:25:13.402713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.131 [2024-11-19 11:25:13.402720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.131 [2024-11-19 11:25:13.402727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.131 [2024-11-19 11:25:13.415498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.131 [2024-11-19 11:25:13.416176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.131 [2024-11-19 11:25:13.416214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.131 [2024-11-19 11:25:13.416231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.131 [2024-11-19 11:25:13.416469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.131 [2024-11-19 11:25:13.416692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.131 [2024-11-19 11:25:13.416700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.131 [2024-11-19 11:25:13.416708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.131 [2024-11-19 11:25:13.416716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.131 [2024-11-19 11:25:13.429465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.131 [2024-11-19 11:25:13.430048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.131 [2024-11-19 11:25:13.430068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.131 [2024-11-19 11:25:13.430076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.131 [2024-11-19 11:25:13.430295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.131 [2024-11-19 11:25:13.430514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.131 [2024-11-19 11:25:13.430523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.131 [2024-11-19 11:25:13.430530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.131 [2024-11-19 11:25:13.430537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.131 5722.80 IOPS, 22.35 MiB/s [2024-11-19T10:25:13.483Z]
00:31:05.131 [2024-11-19 11:25:13.443274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.131 [2024-11-19 11:25:13.443895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.131 [2024-11-19 11:25:13.443933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.131 [2024-11-19 11:25:13.443944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.131 [2024-11-19 11:25:13.444182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.131 [2024-11-19 11:25:13.444405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.131 [2024-11-19 11:25:13.444413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.131 [2024-11-19 11:25:13.444420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.131 [2024-11-19 11:25:13.444428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.131 [2024-11-19 11:25:13.457185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.131 [2024-11-19 11:25:13.457813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.131 [2024-11-19 11:25:13.457850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.131 [2024-11-19 11:25:13.457869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.131 [2024-11-19 11:25:13.458110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.131 [2024-11-19 11:25:13.458337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.131 [2024-11-19 11:25:13.458346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.131 [2024-11-19 11:25:13.458353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.131 [2024-11-19 11:25:13.458361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.131 [2024-11-19 11:25:13.471122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.131 [2024-11-19 11:25:13.471805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.131 [2024-11-19 11:25:13.471843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.131 [2024-11-19 11:25:13.471855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.131 [2024-11-19 11:25:13.472104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.131 [2024-11-19 11:25:13.472327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.131 [2024-11-19 11:25:13.472336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.131 [2024-11-19 11:25:13.472343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.131 [2024-11-19 11:25:13.472352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.394 [2024-11-19 11:25:13.485097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.394 [2024-11-19 11:25:13.485692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.394 [2024-11-19 11:25:13.485710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.394 [2024-11-19 11:25:13.485719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.394 [2024-11-19 11:25:13.485944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.394 [2024-11-19 11:25:13.486164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.394 [2024-11-19 11:25:13.486172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.394 [2024-11-19 11:25:13.486180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.394 [2024-11-19 11:25:13.486187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.394 [2024-11-19 11:25:13.498929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.394 [2024-11-19 11:25:13.499557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.394 [2024-11-19 11:25:13.499595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.394 [2024-11-19 11:25:13.499605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.394 [2024-11-19 11:25:13.499844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.394 [2024-11-19 11:25:13.500075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.394 [2024-11-19 11:25:13.500085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.394 [2024-11-19 11:25:13.500097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.394 [2024-11-19 11:25:13.500106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.394 [2024-11-19 11:25:13.512852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.394 [2024-11-19 11:25:13.513439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.394 [2024-11-19 11:25:13.513458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.394 [2024-11-19 11:25:13.513466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.394 [2024-11-19 11:25:13.513685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.394 [2024-11-19 11:25:13.513910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.394 [2024-11-19 11:25:13.513919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.394 [2024-11-19 11:25:13.513926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.394 [2024-11-19 11:25:13.513933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.394 [2024-11-19 11:25:13.526675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.394 [2024-11-19 11:25:13.527233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.394 [2024-11-19 11:25:13.527250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.394 [2024-11-19 11:25:13.527258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.394 [2024-11-19 11:25:13.527476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.394 [2024-11-19 11:25:13.527695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.394 [2024-11-19 11:25:13.527703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.394 [2024-11-19 11:25:13.527710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.527717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.395 [2024-11-19 11:25:13.540467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.395 [2024-11-19 11:25:13.541168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.395 [2024-11-19 11:25:13.541205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.395 [2024-11-19 11:25:13.541216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.395 [2024-11-19 11:25:13.541454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.395 [2024-11-19 11:25:13.541677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.395 [2024-11-19 11:25:13.541685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.395 [2024-11-19 11:25:13.541693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.541701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.395 [2024-11-19 11:25:13.554454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.395 [2024-11-19 11:25:13.555153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.395 [2024-11-19 11:25:13.555191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.395 [2024-11-19 11:25:13.555201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.395 [2024-11-19 11:25:13.555440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.395 [2024-11-19 11:25:13.555663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.395 [2024-11-19 11:25:13.555671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.395 [2024-11-19 11:25:13.555679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.555687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.395 [2024-11-19 11:25:13.568452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.395 [2024-11-19 11:25:13.569147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.395 [2024-11-19 11:25:13.569184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.395 [2024-11-19 11:25:13.569195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.395 [2024-11-19 11:25:13.569433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.395 [2024-11-19 11:25:13.569656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.395 [2024-11-19 11:25:13.569665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.395 [2024-11-19 11:25:13.569672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.569680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.395 [2024-11-19 11:25:13.582430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.395 [2024-11-19 11:25:13.582920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.395 [2024-11-19 11:25:13.582940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.395 [2024-11-19 11:25:13.582947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.395 [2024-11-19 11:25:13.583167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.395 [2024-11-19 11:25:13.583386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.395 [2024-11-19 11:25:13.583396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.395 [2024-11-19 11:25:13.583403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.583410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.395 [2024-11-19 11:25:13.596355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.395 [2024-11-19 11:25:13.596874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.395 [2024-11-19 11:25:13.596891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.395 [2024-11-19 11:25:13.596904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.395 [2024-11-19 11:25:13.597123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.395 [2024-11-19 11:25:13.597341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.395 [2024-11-19 11:25:13.597349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.395 [2024-11-19 11:25:13.597356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.597362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.395 [2024-11-19 11:25:13.610307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.395 [2024-11-19 11:25:13.610977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.395 [2024-11-19 11:25:13.611015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.395 [2024-11-19 11:25:13.611026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.395 [2024-11-19 11:25:13.611264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.395 [2024-11-19 11:25:13.611486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.395 [2024-11-19 11:25:13.611494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.395 [2024-11-19 11:25:13.611502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.611510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.395 [2024-11-19 11:25:13.624288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.395 [2024-11-19 11:25:13.624938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.395 [2024-11-19 11:25:13.624976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.395 [2024-11-19 11:25:13.624988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.395 [2024-11-19 11:25:13.625228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.395 [2024-11-19 11:25:13.625450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.395 [2024-11-19 11:25:13.625458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.395 [2024-11-19 11:25:13.625466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.625474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.395 [2024-11-19 11:25:13.638234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.395 [2024-11-19 11:25:13.638908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.395 [2024-11-19 11:25:13.638946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.395 [2024-11-19 11:25:13.638956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.395 [2024-11-19 11:25:13.639194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.395 [2024-11-19 11:25:13.639418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.395 [2024-11-19 11:25:13.639431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.395 [2024-11-19 11:25:13.639439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.395 [2024-11-19 11:25:13.639447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.396 [2024-11-19 11:25:13.652194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.396 [2024-11-19 11:25:13.652834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.396 [2024-11-19 11:25:13.652878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.396 [2024-11-19 11:25:13.652890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.396 [2024-11-19 11:25:13.653129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.396 [2024-11-19 11:25:13.653351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.396 [2024-11-19 11:25:13.653360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.396 [2024-11-19 11:25:13.653367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.396 [2024-11-19 11:25:13.653375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.396 [2024-11-19 11:25:13.666130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.396 [2024-11-19 11:25:13.666802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.396 [2024-11-19 11:25:13.666839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.396 [2024-11-19 11:25:13.666851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.396 [2024-11-19 11:25:13.667100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.396 [2024-11-19 11:25:13.667323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.396 [2024-11-19 11:25:13.667331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.396 [2024-11-19 11:25:13.667340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.396 [2024-11-19 11:25:13.667348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.396 [2024-11-19 11:25:13.680088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.396 [2024-11-19 11:25:13.680717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.396 [2024-11-19 11:25:13.680755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.396 [2024-11-19 11:25:13.680765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.396 [2024-11-19 11:25:13.681013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.396 [2024-11-19 11:25:13.681236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.396 [2024-11-19 11:25:13.681244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.396 [2024-11-19 11:25:13.681252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.396 [2024-11-19 11:25:13.681265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.396 [2024-11-19 11:25:13.694090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.396 [2024-11-19 11:25:13.694744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.396 [2024-11-19 11:25:13.694782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.396 [2024-11-19 11:25:13.694793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.396 [2024-11-19 11:25:13.695039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.396 [2024-11-19 11:25:13.695263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.396 [2024-11-19 11:25:13.695271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.396 [2024-11-19 11:25:13.695278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.396 [2024-11-19 11:25:13.695286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.396 [2024-11-19 11:25:13.708026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.396 [2024-11-19 11:25:13.708609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.396 [2024-11-19 11:25:13.708628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.396 [2024-11-19 11:25:13.708636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.396 [2024-11-19 11:25:13.708855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.396 [2024-11-19 11:25:13.709081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.396 [2024-11-19 11:25:13.709090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.396 [2024-11-19 11:25:13.709097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.396 [2024-11-19 11:25:13.709104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.396 [2024-11-19 11:25:13.721834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.396 [2024-11-19 11:25:13.722363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.396 [2024-11-19 11:25:13.722381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.396 [2024-11-19 11:25:13.722389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.396 [2024-11-19 11:25:13.722607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.396 [2024-11-19 11:25:13.722826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.396 [2024-11-19 11:25:13.722834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.396 [2024-11-19 11:25:13.722841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.396 [2024-11-19 11:25:13.722847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.396 [2024-11-19 11:25:13.735802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.396 [2024-11-19 11:25:13.736387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.396 [2024-11-19 11:25:13.736404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
00:31:05.396 [2024-11-19 11:25:13.736412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
00:31:05.396 [2024-11-19 11:25:13.736630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
00:31:05.396 [2024-11-19 11:25:13.736849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:05.396 [2024-11-19 11:25:13.736857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:05.396 [2024-11-19 11:25:13.736869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:05.396 [2024-11-19 11:25:13.736876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:05.659 [2024-11-19 11:25:13.749605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.659 [2024-11-19 11:25:13.750258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.659 [2024-11-19 11:25:13.750296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.659 [2024-11-19 11:25:13.750307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.659 [2024-11-19 11:25:13.750545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.659 [2024-11-19 11:25:13.750768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.659 [2024-11-19 11:25:13.750776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.659 [2024-11-19 11:25:13.750784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.659 [2024-11-19 11:25:13.750792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.659 [2024-11-19 11:25:13.763549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.659 [2024-11-19 11:25:13.764243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.659 [2024-11-19 11:25:13.764281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.659 [2024-11-19 11:25:13.764294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.659 [2024-11-19 11:25:13.764533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.659 [2024-11-19 11:25:13.764755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.659 [2024-11-19 11:25:13.764766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.659 [2024-11-19 11:25:13.764774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.659 [2024-11-19 11:25:13.764782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.659 [2024-11-19 11:25:13.777540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.659 [2024-11-19 11:25:13.778159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.659 [2024-11-19 11:25:13.778197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.659 [2024-11-19 11:25:13.778208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.659 [2024-11-19 11:25:13.778451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.659 [2024-11-19 11:25:13.778674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.659 [2024-11-19 11:25:13.778683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.659 [2024-11-19 11:25:13.778690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.659 [2024-11-19 11:25:13.778698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.659 [2024-11-19 11:25:13.791447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.659 [2024-11-19 11:25:13.792154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.659 [2024-11-19 11:25:13.792192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.659 [2024-11-19 11:25:13.792203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.659 [2024-11-19 11:25:13.792441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.659 [2024-11-19 11:25:13.792664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.659 [2024-11-19 11:25:13.792672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.659 [2024-11-19 11:25:13.792680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.659 [2024-11-19 11:25:13.792688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.659 [2024-11-19 11:25:13.805436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.659 [2024-11-19 11:25:13.805985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.659 [2024-11-19 11:25:13.806004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.659 [2024-11-19 11:25:13.806012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.659 [2024-11-19 11:25:13.806231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.659 [2024-11-19 11:25:13.806450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.659 [2024-11-19 11:25:13.806457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.659 [2024-11-19 11:25:13.806465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.659 [2024-11-19 11:25:13.806472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.659 [2024-11-19 11:25:13.819419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.659 [2024-11-19 11:25:13.819962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.659 [2024-11-19 11:25:13.819979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.659 [2024-11-19 11:25:13.819987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.659 [2024-11-19 11:25:13.820205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.659 [2024-11-19 11:25:13.820423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.659 [2024-11-19 11:25:13.820436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.659 [2024-11-19 11:25:13.820443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.820449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.833227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.833692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.833710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.660 [2024-11-19 11:25:13.833719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.660 [2024-11-19 11:25:13.833943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.660 [2024-11-19 11:25:13.834163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.660 [2024-11-19 11:25:13.834171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.660 [2024-11-19 11:25:13.834178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.834185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.847121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.847647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.847663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.660 [2024-11-19 11:25:13.847670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.660 [2024-11-19 11:25:13.847893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.660 [2024-11-19 11:25:13.848112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.660 [2024-11-19 11:25:13.848120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.660 [2024-11-19 11:25:13.848127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.848133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.861071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.861737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.861774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.660 [2024-11-19 11:25:13.861785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.660 [2024-11-19 11:25:13.862031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.660 [2024-11-19 11:25:13.862254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.660 [2024-11-19 11:25:13.862263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.660 [2024-11-19 11:25:13.862271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.862283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.875036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.875707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.875745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.660 [2024-11-19 11:25:13.875755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.660 [2024-11-19 11:25:13.876003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.660 [2024-11-19 11:25:13.876226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.660 [2024-11-19 11:25:13.876234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.660 [2024-11-19 11:25:13.876242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.876250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.889007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.889682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.889720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.660 [2024-11-19 11:25:13.889731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.660 [2024-11-19 11:25:13.889980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.660 [2024-11-19 11:25:13.890203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.660 [2024-11-19 11:25:13.890211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.660 [2024-11-19 11:25:13.890219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.890227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.902979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.903498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.903536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.660 [2024-11-19 11:25:13.903546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.660 [2024-11-19 11:25:13.903785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.660 [2024-11-19 11:25:13.904017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.660 [2024-11-19 11:25:13.904026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.660 [2024-11-19 11:25:13.904034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.904042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.916790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.917472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.917510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.660 [2024-11-19 11:25:13.917521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.660 [2024-11-19 11:25:13.917759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.660 [2024-11-19 11:25:13.917991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.660 [2024-11-19 11:25:13.918000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.660 [2024-11-19 11:25:13.918008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.918016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.930759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.931415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.931452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.660 [2024-11-19 11:25:13.931463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.660 [2024-11-19 11:25:13.931701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.660 [2024-11-19 11:25:13.931932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.660 [2024-11-19 11:25:13.931942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.660 [2024-11-19 11:25:13.931950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.660 [2024-11-19 11:25:13.931958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.660 [2024-11-19 11:25:13.944717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.660 [2024-11-19 11:25:13.945308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.660 [2024-11-19 11:25:13.945327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.661 [2024-11-19 11:25:13.945336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.661 [2024-11-19 11:25:13.945555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.661 [2024-11-19 11:25:13.945773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.661 [2024-11-19 11:25:13.945781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.661 [2024-11-19 11:25:13.945788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.661 [2024-11-19 11:25:13.945795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.661 [2024-11-19 11:25:13.958524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.661 [2024-11-19 11:25:13.959119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.661 [2024-11-19 11:25:13.959157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.661 [2024-11-19 11:25:13.959168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.661 [2024-11-19 11:25:13.959410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.661 [2024-11-19 11:25:13.959633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.661 [2024-11-19 11:25:13.959641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.661 [2024-11-19 11:25:13.959649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.661 [2024-11-19 11:25:13.959657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.661 [2024-11-19 11:25:13.972413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.661 [2024-11-19 11:25:13.973052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.661 [2024-11-19 11:25:13.973089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.661 [2024-11-19 11:25:13.973100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.661 [2024-11-19 11:25:13.973338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.661 [2024-11-19 11:25:13.973561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.661 [2024-11-19 11:25:13.973569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.661 [2024-11-19 11:25:13.973577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.661 [2024-11-19 11:25:13.973585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.661 [2024-11-19 11:25:13.986335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.661 [2024-11-19 11:25:13.987017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.661 [2024-11-19 11:25:13.987056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.661 [2024-11-19 11:25:13.987066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.661 [2024-11-19 11:25:13.987304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.661 [2024-11-19 11:25:13.987527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.661 [2024-11-19 11:25:13.987535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.661 [2024-11-19 11:25:13.987543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.661 [2024-11-19 11:25:13.987551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.661 [2024-11-19 11:25:14.000312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.661 [2024-11-19 11:25:14.000969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.661 [2024-11-19 11:25:14.001007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.661 [2024-11-19 11:25:14.001019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.661 [2024-11-19 11:25:14.001258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.661 [2024-11-19 11:25:14.001481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.661 [2024-11-19 11:25:14.001495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.661 [2024-11-19 11:25:14.001504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.661 [2024-11-19 11:25:14.001512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 [2024-11-19 11:25:14.014274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.014873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.014893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.924 [2024-11-19 11:25:14.014901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.924 [2024-11-19 11:25:14.015121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.924 [2024-11-19 11:25:14.015340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.924 [2024-11-19 11:25:14.015348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.924 [2024-11-19 11:25:14.015356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.924 [2024-11-19 11:25:14.015363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 [2024-11-19 11:25:14.028101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.028673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.028690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.924 [2024-11-19 11:25:14.028697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.924 [2024-11-19 11:25:14.028920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.924 [2024-11-19 11:25:14.029139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.924 [2024-11-19 11:25:14.029147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.924 [2024-11-19 11:25:14.029154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.924 [2024-11-19 11:25:14.029161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 [2024-11-19 11:25:14.041940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.042595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.042633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.924 [2024-11-19 11:25:14.042644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.924 [2024-11-19 11:25:14.042890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.924 [2024-11-19 11:25:14.043114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.924 [2024-11-19 11:25:14.043122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.924 [2024-11-19 11:25:14.043129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.924 [2024-11-19 11:25:14.043142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 [2024-11-19 11:25:14.055892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.056566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.056604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.924 [2024-11-19 11:25:14.056615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.924 [2024-11-19 11:25:14.056853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.924 [2024-11-19 11:25:14.057085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.924 [2024-11-19 11:25:14.057095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.924 [2024-11-19 11:25:14.057103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.924 [2024-11-19 11:25:14.057111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 [2024-11-19 11:25:14.069854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.070449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.070468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.924 [2024-11-19 11:25:14.070476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.924 [2024-11-19 11:25:14.070695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.924 [2024-11-19 11:25:14.070921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.924 [2024-11-19 11:25:14.070930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.924 [2024-11-19 11:25:14.070937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.924 [2024-11-19 11:25:14.070944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 [2024-11-19 11:25:14.083727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.084385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.084422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.924 [2024-11-19 11:25:14.084434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.924 [2024-11-19 11:25:14.084672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.924 [2024-11-19 11:25:14.084903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.924 [2024-11-19 11:25:14.084912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.924 [2024-11-19 11:25:14.084920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.924 [2024-11-19 11:25:14.084929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 [2024-11-19 11:25:14.097649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.098288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.098331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.924 [2024-11-19 11:25:14.098341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.924 [2024-11-19 11:25:14.098580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.924 [2024-11-19 11:25:14.098803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.924 [2024-11-19 11:25:14.098812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.924 [2024-11-19 11:25:14.098819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.924 [2024-11-19 11:25:14.098827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 [2024-11-19 11:25:14.111591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.112223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.112261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.924 [2024-11-19 11:25:14.112272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.924 [2024-11-19 11:25:14.112510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.924 [2024-11-19 11:25:14.112733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.924 [2024-11-19 11:25:14.112741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.924 [2024-11-19 11:25:14.112749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.924 [2024-11-19 11:25:14.112757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 137683 Killed "${NVMF_APP[@]}" "$@" 00:31:05.924 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:31:05.924 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:05.924 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:05.924 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:05.924 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:05.924 [2024-11-19 11:25:14.125514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.924 [2024-11-19 11:25:14.126063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.924 [2024-11-19 11:25:14.126084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.126091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.126311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.126531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.925 [2024-11-19 11:25:14.126539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.925 [2024-11-19 11:25:14.126546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:31:05.925 [2024-11-19 11:25:14.126553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=139397 00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 139397 00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 139397 ']' 00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.925 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:05.925 [2024-11-19 11:25:14.139334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.925 [2024-11-19 11:25:14.140073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.925 [2024-11-19 11:25:14.140111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.140122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.140361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.140584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.925 [2024-11-19 11:25:14.140593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.925 [2024-11-19 11:25:14.140601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.925 [2024-11-19 11:25:14.140610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.925 [2024-11-19 11:25:14.153167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.925 [2024-11-19 11:25:14.153842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.925 [2024-11-19 11:25:14.153887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.153901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.154141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.154364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.925 [2024-11-19 11:25:14.154382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.925 [2024-11-19 11:25:14.154391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.925 [2024-11-19 11:25:14.154400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.925 [2024-11-19 11:25:14.166977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.925 [2024-11-19 11:25:14.167538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.925 [2024-11-19 11:25:14.167576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.167593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.167833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.168064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.925 [2024-11-19 11:25:14.168074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.925 [2024-11-19 11:25:14.168082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.925 [2024-11-19 11:25:14.168090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.925 [2024-11-19 11:25:14.180831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.925 [2024-11-19 11:25:14.181509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.925 [2024-11-19 11:25:14.181547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.181558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.181796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.182027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.925 [2024-11-19 11:25:14.182036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.925 [2024-11-19 11:25:14.182044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.925 [2024-11-19 11:25:14.182052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:05.925 [2024-11-19 11:25:14.187521] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:31:05.925 [2024-11-19 11:25:14.187574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.925 [2024-11-19 11:25:14.194795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.925 [2024-11-19 11:25:14.195458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.925 [2024-11-19 11:25:14.195497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.195508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.195746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.195976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.925 [2024-11-19 11:25:14.195986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.925 [2024-11-19 11:25:14.195995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.925 [2024-11-19 11:25:14.196003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.925 [2024-11-19 11:25:14.208749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.925 [2024-11-19 11:25:14.209434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.925 [2024-11-19 11:25:14.209472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.209487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.209726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.209957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.925 [2024-11-19 11:25:14.209966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.925 [2024-11-19 11:25:14.209974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.925 [2024-11-19 11:25:14.209984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.925 [2024-11-19 11:25:14.222619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.925 [2024-11-19 11:25:14.223158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.925 [2024-11-19 11:25:14.223178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.223187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.223406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.223625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.925 [2024-11-19 11:25:14.223633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.925 [2024-11-19 11:25:14.223640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.925 [2024-11-19 11:25:14.223647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.925 [2024-11-19 11:25:14.236610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.925 [2024-11-19 11:25:14.237303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.925 [2024-11-19 11:25:14.237341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.925 [2024-11-19 11:25:14.237351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.925 [2024-11-19 11:25:14.237589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.925 [2024-11-19 11:25:14.237812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.926 [2024-11-19 11:25:14.237821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.926 [2024-11-19 11:25:14.237829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.926 [2024-11-19 11:25:14.237838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.926 [2024-11-19 11:25:14.250412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.926 [2024-11-19 11:25:14.250982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.926 [2024-11-19 11:25:14.251020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.926 [2024-11-19 11:25:14.251033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.926 [2024-11-19 11:25:14.251274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.926 [2024-11-19 11:25:14.251501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.926 [2024-11-19 11:25:14.251510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.926 [2024-11-19 11:25:14.251518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.926 [2024-11-19 11:25:14.251526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:05.926 [2024-11-19 11:25:14.264288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:05.926 [2024-11-19 11:25:14.264984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.926 [2024-11-19 11:25:14.265021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:05.926 [2024-11-19 11:25:14.265033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:05.926 [2024-11-19 11:25:14.265273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:05.926 [2024-11-19 11:25:14.265496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:05.926 [2024-11-19 11:25:14.265505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:05.926 [2024-11-19 11:25:14.265513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:05.926 [2024-11-19 11:25:14.265521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.188 [2024-11-19 11:25:14.278282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.188 [2024-11-19 11:25:14.278999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.188 [2024-11-19 11:25:14.279037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.188 [2024-11-19 11:25:14.279049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.188 [2024-11-19 11:25:14.279287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.188 [2024-11-19 11:25:14.279510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.188 [2024-11-19 11:25:14.279519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.188 [2024-11-19 11:25:14.279527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.188 [2024-11-19 11:25:14.279536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.188 [2024-11-19 11:25:14.286481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:06.188 [2024-11-19 11:25:14.292076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.188 [2024-11-19 11:25:14.292675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.188 [2024-11-19 11:25:14.292694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.188 [2024-11-19 11:25:14.292702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.188 [2024-11-19 11:25:14.292929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.188 [2024-11-19 11:25:14.293149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.188 [2024-11-19 11:25:14.293157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.188 [2024-11-19 11:25:14.293169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.188 [2024-11-19 11:25:14.293176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.188 [2024-11-19 11:25:14.305916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.188 [2024-11-19 11:25:14.306555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.188 [2024-11-19 11:25:14.306593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.188 [2024-11-19 11:25:14.306603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.188 [2024-11-19 11:25:14.306842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.188 [2024-11-19 11:25:14.307073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.188 [2024-11-19 11:25:14.307083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.188 [2024-11-19 11:25:14.307090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.188 [2024-11-19 11:25:14.307098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:06.188 [2024-11-19 11:25:14.315576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.188 [2024-11-19 11:25:14.315599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.188 [2024-11-19 11:25:14.315606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.188 [2024-11-19 11:25:14.315612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:06.188 [2024-11-19 11:25:14.315616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.188 [2024-11-19 11:25:14.316816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.188 [2024-11-19 11:25:14.316976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:06.188 [2024-11-19 11:25:14.317070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.188 [2024-11-19 11:25:14.319860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.188 [2024-11-19 11:25:14.320524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.188 [2024-11-19 11:25:14.320563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.188 [2024-11-19 11:25:14.320573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.188 [2024-11-19 11:25:14.320812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.188 [2024-11-19 11:25:14.321043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.188 [2024-11-19 11:25:14.321053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.188 [2024-11-19 11:25:14.321060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.188 [2024-11-19 11:25:14.321069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.188 [2024-11-19 11:25:14.333824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.188 [2024-11-19 11:25:14.334447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.188 [2024-11-19 11:25:14.334486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.334502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.334741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.334971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.334981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.334989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.334997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.347740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.348242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.348262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.348270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.348489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.348708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.348716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.348723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.348730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.361678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.362114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.362131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.362139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.362358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.362577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.362585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.362593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.362599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.375555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.376101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.376118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.376126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.376345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.376569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.376576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.376583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.376590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.389528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.389939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.389955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.389963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.390181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.390400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.390408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.390415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.390422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.403419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.403977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.404015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.404028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.404270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.404492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.404501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.404509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.404517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.417263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.417958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.417996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.418009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.418250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.418473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.418482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.418494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.418502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.431253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.431973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.432012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.432024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.432267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.432490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.432499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.432506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.432514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 4769.00 IOPS, 18.63 MiB/s [2024-11-19T10:25:14.541Z] [2024-11-19 11:25:14.445061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.445622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.445660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.445672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.445921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.446144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.446152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.446160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.446168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.458989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.459686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.189 [2024-11-19 11:25:14.459723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.189 [2024-11-19 11:25:14.459734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.189 [2024-11-19 11:25:14.459979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.189 [2024-11-19 11:25:14.460202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.189 [2024-11-19 11:25:14.460210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.189 [2024-11-19 11:25:14.460218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.189 [2024-11-19 11:25:14.460226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.189 [2024-11-19 11:25:14.472979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.189 [2024-11-19 11:25:14.473519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.190 [2024-11-19 11:25:14.473556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.190 [2024-11-19 11:25:14.473568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.190 [2024-11-19 11:25:14.473806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.190 [2024-11-19 11:25:14.474043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.190 [2024-11-19 11:25:14.474054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.190 [2024-11-19 11:25:14.474062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.190 [2024-11-19 11:25:14.474071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.190 [2024-11-19 11:25:14.486809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.190 [2024-11-19 11:25:14.487437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.190 [2024-11-19 11:25:14.487475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.190 [2024-11-19 11:25:14.487486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.190 [2024-11-19 11:25:14.487724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.190 [2024-11-19 11:25:14.487953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.190 [2024-11-19 11:25:14.487962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.190 [2024-11-19 11:25:14.487970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.190 [2024-11-19 11:25:14.487979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.190 [2024-11-19 11:25:14.500774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.190 [2024-11-19 11:25:14.501470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.190 [2024-11-19 11:25:14.501508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.190 [2024-11-19 11:25:14.501519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.190 [2024-11-19 11:25:14.501757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.190 [2024-11-19 11:25:14.501988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.190 [2024-11-19 11:25:14.501997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.190 [2024-11-19 11:25:14.502006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.190 [2024-11-19 11:25:14.502014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.190 [2024-11-19 11:25:14.514755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.190 [2024-11-19 11:25:14.515309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.190 [2024-11-19 11:25:14.515347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.190 [2024-11-19 11:25:14.515363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.190 [2024-11-19 11:25:14.515601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.190 [2024-11-19 11:25:14.515824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.190 [2024-11-19 11:25:14.515832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.190 [2024-11-19 11:25:14.515840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.190 [2024-11-19 11:25:14.515848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.190 [2024-11-19 11:25:14.528644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.190 [2024-11-19 11:25:14.529353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.190 [2024-11-19 11:25:14.529391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.190 [2024-11-19 11:25:14.529402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.190 [2024-11-19 11:25:14.529640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.190 [2024-11-19 11:25:14.529870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.190 [2024-11-19 11:25:14.529880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.190 [2024-11-19 11:25:14.529889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.190 [2024-11-19 11:25:14.529897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.452 [2024-11-19 11:25:14.542446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.452 [2024-11-19 11:25:14.542888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.542909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.542917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.543136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.543355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.543363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.543370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.543377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.453 [2024-11-19 11:25:14.556317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.453 [2024-11-19 11:25:14.556768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.556784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.556792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.557016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.557245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.557253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.557260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.557267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.453 [2024-11-19 11:25:14.570212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.453 [2024-11-19 11:25:14.570849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.570893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.570904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.571142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.571365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.571374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.571381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.571389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.453 [2024-11-19 11:25:14.584127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.453 [2024-11-19 11:25:14.584825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.584871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.584882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.585120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.585343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.585351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.585359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.585367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.453 [2024-11-19 11:25:14.598109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.453 [2024-11-19 11:25:14.598796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.598834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.598845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.599092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.599316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.599325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.599337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.599346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.453 [2024-11-19 11:25:14.612094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.453 [2024-11-19 11:25:14.612782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.612820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.612832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.613083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.613307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.613315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.613323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.613331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.453 [2024-11-19 11:25:14.626075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.453 [2024-11-19 11:25:14.626750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.626788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.626799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.627045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.627268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.627277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.627284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.627292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.453 [2024-11-19 11:25:14.640055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.453 [2024-11-19 11:25:14.640348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.640366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.640374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.640593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.640812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.640820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.640828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.640835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.453 [2024-11-19 11:25:14.653998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.453 [2024-11-19 11:25:14.654638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.453 [2024-11-19 11:25:14.654676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.453 [2024-11-19 11:25:14.654687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.453 [2024-11-19 11:25:14.654933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.453 [2024-11-19 11:25:14.655157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.453 [2024-11-19 11:25:14.655166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.453 [2024-11-19 11:25:14.655173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.453 [2024-11-19 11:25:14.655181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[… the same reset cycle for tqpair=0x9206a0 (resetting controller → connect() failed, errno = 111 → sock connection error with addr=10.0.0.2, port=4420 → Ctrlr is in error state → controller reinitialization failed → Resetting controller failed.) repeats roughly every 14 ms, from 11:25:14.667 through 11:25:14.974 …]
00:31:06.719 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:06.719 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:31:06.719 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:06.719 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:06.719 11:25:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-19 11:25:14.987199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
[2024-11-19 11:25:14.987729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-19 11:25:14.987768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420
[2024-11-19 11:25:14.987780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set
[2024-11-19 11:25:14.988030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor
[2024-11-19 11:25:14.988253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-19 11:25:14.988261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-19 11:25:14.988269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-19 11:25:14.988277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:06.719 [2024-11-19 11:25:15.001023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.719 [2024-11-19 11:25:15.001461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.719 [2024-11-19 11:25:15.001481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.719 [2024-11-19 11:25:15.001489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.719 [2024-11-19 11:25:15.001708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.719 [2024-11-19 11:25:15.001934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.719 [2024-11-19 11:25:15.001951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.719 [2024-11-19 11:25:15.001963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.719 [2024-11-19 11:25:15.001970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.719 [2024-11-19 11:25:15.014924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.719 [2024-11-19 11:25:15.015510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.719 [2024-11-19 11:25:15.015548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.719 [2024-11-19 11:25:15.015561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.719 [2024-11-19 11:25:15.015800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.719 [2024-11-19 11:25:15.016030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.719 [2024-11-19 11:25:15.016042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.719 [2024-11-19 11:25:15.016050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.719 [2024-11-19 11:25:15.016058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.719 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.719 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.719 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.719 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:06.719 [2024-11-19 11:25:15.028804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.719 [2024-11-19 11:25:15.029407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.719 [2024-11-19 11:25:15.029428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.719 [2024-11-19 11:25:15.029436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.719 [2024-11-19 11:25:15.029655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.719 [2024-11-19 11:25:15.029879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.719 [2024-11-19 11:25:15.029888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.719 [2024-11-19 11:25:15.029895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.719 [2024-11-19 11:25:15.029902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.719 [2024-11-19 11:25:15.032907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.719 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.719 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:06.719 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.719 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:06.719 [2024-11-19 11:25:15.042652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.719 [2024-11-19 11:25:15.043206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.719 [2024-11-19 11:25:15.043225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.719 [2024-11-19 11:25:15.043238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.719 [2024-11-19 11:25:15.043459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.719 [2024-11-19 11:25:15.043680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.719 [2024-11-19 11:25:15.043688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.719 [2024-11-19 11:25:15.043695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.719 [2024-11-19 11:25:15.043703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.719 [2024-11-19 11:25:15.056448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.719 [2024-11-19 11:25:15.057113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.719 [2024-11-19 11:25:15.057151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.719 [2024-11-19 11:25:15.057162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.719 [2024-11-19 11:25:15.057401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.719 [2024-11-19 11:25:15.057624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.719 [2024-11-19 11:25:15.057633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.719 [2024-11-19 11:25:15.057640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.719 [2024-11-19 11:25:15.057648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.981 [2024-11-19 11:25:15.070417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.981 [2024-11-19 11:25:15.070858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.981 [2024-11-19 11:25:15.070885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.981 [2024-11-19 11:25:15.070893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.981 [2024-11-19 11:25:15.071113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.981 [2024-11-19 11:25:15.071333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.981 [2024-11-19 11:25:15.071341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.981 [2024-11-19 11:25:15.071348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.981 [2024-11-19 11:25:15.071355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.981 Malloc0 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:06.981 [2024-11-19 11:25:15.084336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.981 [2024-11-19 11:25:15.084874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.981 [2024-11-19 11:25:15.084917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.981 [2024-11-19 11:25:15.084928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.981 [2024-11-19 11:25:15.085167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.981 [2024-11-19 11:25:15.085389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.981 [2024-11-19 11:25:15.085398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.981 [2024-11-19 11:25:15.085406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.981 [2024-11-19 11:25:15.085414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:06.981 [2024-11-19 11:25:15.098167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.981 [2024-11-19 11:25:15.098762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.981 [2024-11-19 11:25:15.098801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9206a0 with addr=10.0.0.2, port=4420 00:31:06.981 [2024-11-19 11:25:15.098812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9206a0 is same with the state(6) to be set 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.981 [2024-11-19 11:25:15.099058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9206a0 (9): Bad file descriptor 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.981 [2024-11-19 11:25:15.099281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.981 [2024-11-19 11:25:15.099290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.981 [2024-11-19 11:25:15.099299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:31:06.981 [2024-11-19 11:25:15.099307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:06.981 [2024-11-19 11:25:15.105980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.981 11:25:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 138283 00:31:06.981 [2024-11-19 11:25:15.112053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.981 [2024-11-19 11:25:15.295635] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:31:08.185 4324.14 IOPS, 16.89 MiB/s [2024-11-19T10:25:17.480Z] 5209.62 IOPS, 20.35 MiB/s [2024-11-19T10:25:18.866Z] 5885.78 IOPS, 22.99 MiB/s [2024-11-19T10:25:19.807Z] 6409.30 IOPS, 25.04 MiB/s [2024-11-19T10:25:20.750Z] 6861.55 IOPS, 26.80 MiB/s [2024-11-19T10:25:21.693Z] 7217.25 IOPS, 28.19 MiB/s [2024-11-19T10:25:22.636Z] 7523.54 IOPS, 29.39 MiB/s [2024-11-19T10:25:23.577Z] 7790.07 IOPS, 30.43 MiB/s 00:31:15.225 Latency(us) 00:31:15.225 [2024-11-19T10:25:23.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.225 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:15.225 Verification LBA range: start 0x0 length 0x4000 00:31:15.225 Nvme1n1 : 15.00 8008.63 31.28 10220.52 0.00 6996.06 583.68 14199.47 00:31:15.225 [2024-11-19T10:25:23.577Z] =================================================================================================================== 00:31:15.225 [2024-11-19T10:25:23.577Z] Total : 8008.63 31.28 10220.52 0.00 6996.06 583.68 14199.47 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:31:15.486 11:25:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:15.486 rmmod nvme_tcp 00:31:15.486 rmmod nvme_fabrics 00:31:15.486 rmmod nvme_keyring 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:15.486 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 139397 ']' 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 139397 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 139397 ']' 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 139397 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139397 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139397' 00:31:15.487 killing process with pid 139397 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 139397 00:31:15.487 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 139397 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.748 11:25:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.662 11:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.662 00:31:17.662 real 0m28.768s 00:31:17.662 user 1m3.265s 00:31:17.662 sys 0m7.876s 00:31:17.662 11:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.662 11:25:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.662 ************************************ 00:31:17.662 END TEST nvmf_bdevperf 00:31:17.662 ************************************ 00:31:17.662 11:25:25 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:17.662 11:25:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:17.662 11:25:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.663 11:25:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.663 ************************************ 00:31:17.663 START TEST nvmf_target_disconnect 00:31:17.663 ************************************ 00:31:17.663 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:17.924 * Looking for test storage... 00:31:17.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.924 11:25:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.924 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:17.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.925 --rc genhtml_branch_coverage=1 00:31:17.925 --rc genhtml_function_coverage=1 00:31:17.925 --rc genhtml_legend=1 00:31:17.925 --rc geninfo_all_blocks=1 00:31:17.925 --rc geninfo_unexecuted_blocks=1 
00:31:17.925 00:31:17.925 ' 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:17.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.925 --rc genhtml_branch_coverage=1 00:31:17.925 --rc genhtml_function_coverage=1 00:31:17.925 --rc genhtml_legend=1 00:31:17.925 --rc geninfo_all_blocks=1 00:31:17.925 --rc geninfo_unexecuted_blocks=1 00:31:17.925 00:31:17.925 ' 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:17.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.925 --rc genhtml_branch_coverage=1 00:31:17.925 --rc genhtml_function_coverage=1 00:31:17.925 --rc genhtml_legend=1 00:31:17.925 --rc geninfo_all_blocks=1 00:31:17.925 --rc geninfo_unexecuted_blocks=1 00:31:17.925 00:31:17.925 ' 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:17.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.925 --rc genhtml_branch_coverage=1 00:31:17.925 --rc genhtml_function_coverage=1 00:31:17.925 --rc genhtml_legend=1 00:31:17.925 --rc geninfo_all_blocks=1 00:31:17.925 --rc geninfo_unexecuted_blocks=1 00:31:17.925 00:31:17.925 ' 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.925 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.926 11:25:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:17.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.926 11:25:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.068 
11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:26.068 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:26.068 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.068 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:26.069 Found net devices under 0000:31:00.0: cvl_0_0 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:26.069 Found net devices under 0000:31:00.1: cvl_0_1 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.069 11:25:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.069 11:25:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:31:26.069 00:31:26.069 --- 10.0.0.2 ping statistics --- 00:31:26.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.069 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:31:26.069 00:31:26.069 --- 10.0.0.1 ping statistics --- 00:31:26.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.069 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:26.069 11:25:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:26.069 ************************************ 00:31:26.069 START TEST nvmf_target_disconnect_tc1 00:31:26.069 ************************************ 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:26.069 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:26.331 [2024-11-19 11:25:34.428944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.331 [2024-11-19 11:25:34.428993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd5cf0 with 
addr=10.0.0.2, port=4420 00:31:26.331 [2024-11-19 11:25:34.429016] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:26.331 [2024-11-19 11:25:34.429026] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:26.331 [2024-11-19 11:25:34.429033] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:26.331 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:26.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:26.331 Initializing NVMe Controllers 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:26.331 00:31:26.331 real 0m0.098s 00:31:26.331 user 0m0.047s 00:31:26.331 sys 0m0.051s 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:26.331 ************************************ 00:31:26.331 END TEST nvmf_target_disconnect_tc1 00:31:26.331 ************************************ 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:26.331 11:25:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:26.331 ************************************ 00:31:26.331 START TEST nvmf_target_disconnect_tc2 00:31:26.331 ************************************ 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=145970 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 145970 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 145970 ']' 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.331 11:25:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:26.331 [2024-11-19 11:25:34.590634] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:31:26.331 [2024-11-19 11:25:34.590682] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.592 [2024-11-19 11:25:34.692050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.592 [2024-11-19 11:25:34.729699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.592 [2024-11-19 11:25:34.729735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.592 [2024-11-19 11:25:34.729743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.592 [2024-11-19 11:25:34.729750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.592 [2024-11-19 11:25:34.729756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:26.592 [2024-11-19 11:25:34.731364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:26.592 [2024-11-19 11:25:34.731514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:26.592 [2024-11-19 11:25:34.731665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:26.592 [2024-11-19 11:25:34.731666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 Malloc0 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.163 11:25:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 [2024-11-19 11:25:35.450507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.163 11:25:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 [2024-11-19 11:25:35.490790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=146157 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:27.163 11:25:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:29.726 11:25:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 145970
00:31:29.727 11:25:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:31:29.727 Read completed with error (sct=0, sc=8)
00:31:29.727 starting I/O failed
[the remaining outstanding reads and writes on the qpair (queue depth 32) all completed the same way: "completed with error (sct=0, sc=8)" / "starting I/O failed"; the repeated entries are elided]
00:31:29.727 [2024-11-19 11:25:37.524300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:29.727 [2024-11-19 11:25:37.524757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.727 [2024-11-19 11:25:37.524776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.727 qpair failed and we were unable to recover it.
00:31:29.727 [2024-11-19 11:25:37.525157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.727 [2024-11-19 11:25:37.525184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.727 qpair failed and we were unable to recover it.
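The repeated "connect() failed, errno = 111" entries are the initiator's reconnect loop hitting ECONNREFUSED (errno 111 on Linux): the target process has just been killed, so nothing is listening on 10.0.0.2:4420 any more and every TCP connect is refused by the kernel. The same failure mode can be reproduced standalone with a plain TCP connect to a port that has no listener (the loopback address and port below are illustrative, unrelated to this run):

```shell
#!/usr/bin/env bash
# errno 111 (ECONNREFUSED on Linux) is what nvme_tcp_qpair_connect_sock()
# keeps hitting once the nvmf target is dead. Demonstrate the same refusal
# by connecting to a loopback port with no listener (port 1 is essentially
# never open on a test host; this is an assumption, not taken from the log).
if bash -c 'exec 3<>/dev/tcp/127.0.0.1/1' 2>/dev/null; then
    result=connected   # would mean something unexpectedly listens on port 1
else
    result=refused     # connect() failed; on Linux this is errno 111
fi
echo "$result"
```

The initiator keeps retrying because the test expects the target to come back; while it is down, every attempt produces the same connect-failed/qpair-failed pair seen below.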
00:31:29.727 [2024-11-19 11:25:37.525397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.727 [2024-11-19 11:25:37.525407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.727 qpair failed and we were unable to recover it.
[this connect-failed / sock-connection-error / "qpair failed and we were unable to recover it" triple repeats for every reconnect attempt while the target is down, with only the timestamps advancing (11:25:37.525 through 11:25:37.554); the repeated entries are elided]
00:31:29.730 [2024-11-19 11:25:37.554911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.554918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.555208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.555215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.555508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.555515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.555850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.555857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.556261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.556269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 
00:31:29.730 [2024-11-19 11:25:37.556567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.556575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.556916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.556923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.557240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.557247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.557411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.557418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.557693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.557699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 
00:31:29.730 [2024-11-19 11:25:37.557941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.557948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.558259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.558265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.558607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.558614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.558923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.558930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.559226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.559233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 
00:31:29.730 [2024-11-19 11:25:37.559563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.559569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.559864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.559871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.560158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.560165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.560435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.560441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.560777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.560784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 
00:31:29.730 [2024-11-19 11:25:37.561085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.561093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.561417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.561424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.561723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.561730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.562016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.562023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.562415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.562424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 
00:31:29.730 [2024-11-19 11:25:37.562747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.562754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.563162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.563169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.563457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.563465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.563714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.563721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.564047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.564054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 
00:31:29.730 [2024-11-19 11:25:37.564341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.564348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.564525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.564532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.564714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.564722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.565018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.565025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 00:31:29.730 [2024-11-19 11:25:37.565322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.730 [2024-11-19 11:25:37.565329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.730 qpair failed and we were unable to recover it. 
00:31:29.730 [2024-11-19 11:25:37.565628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.565635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.565822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.565830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.566147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.566155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.566518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.566525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.566816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.566823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 
00:31:29.731 [2024-11-19 11:25:37.567132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.567140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.567300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.567309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.567639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.567647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.567930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.567937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.568261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.568268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 
00:31:29.731 [2024-11-19 11:25:37.568569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.568575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.568863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.568871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.569182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.569188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.569494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.569501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.569825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.569832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 
00:31:29.731 [2024-11-19 11:25:37.570169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.570176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.570481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.570488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.570803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.570809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.570979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.570986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.571293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.571300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 
00:31:29.731 [2024-11-19 11:25:37.571677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.571683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.571959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.571966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.572284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.572290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.572588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.572595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.572768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.572775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 
00:31:29.731 [2024-11-19 11:25:37.572999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.573006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.573300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.573306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.573492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.573500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.573790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.573804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.574102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.574111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 
00:31:29.731 [2024-11-19 11:25:37.574422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.574429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.574712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.731 [2024-11-19 11:25:37.574719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.731 qpair failed and we were unable to recover it. 00:31:29.731 [2024-11-19 11:25:37.575006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.575013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.575331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.575338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.575649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.575655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 
00:31:29.732 [2024-11-19 11:25:37.575906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.575913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.576196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.576202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.576501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.576507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.576828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.576835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.577139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.577146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 
00:31:29.732 [2024-11-19 11:25:37.577468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.577476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.577783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.577790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.578096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.578103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.578414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.578422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 00:31:29.732 [2024-11-19 11:25:37.578717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.732 [2024-11-19 11:25:37.578724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.732 qpair failed and we were unable to recover it. 
00:31:29.735 [2024-11-19 11:25:37.611998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.612005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.612335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.612342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.612641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.612648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.612957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.612964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.613283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.613290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 
00:31:29.735 [2024-11-19 11:25:37.613595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.613602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.613895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.613902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.614220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.614226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.614511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.614519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.614825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.614832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 
00:31:29.735 [2024-11-19 11:25:37.615119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.615127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.615424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.615431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.615723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.615736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.616031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.616038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.616326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.616334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 
00:31:29.735 [2024-11-19 11:25:37.616643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.616649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.616835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.616842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.617228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.617235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.617542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.617549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.617863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.617870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 
00:31:29.735 [2024-11-19 11:25:37.618058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.618065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.618436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.618442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.618759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.618766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.619076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.619083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.619367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.619375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 
00:31:29.735 [2024-11-19 11:25:37.619688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.619695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.619858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.619869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.735 [2024-11-19 11:25:37.620157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.735 [2024-11-19 11:25:37.620164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.735 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.620471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.620477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.620651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.620659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 
00:31:29.736 [2024-11-19 11:25:37.620977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.620984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.621205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.621212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.621541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.621547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.621840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.621855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.622164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.622173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 
00:31:29.736 [2024-11-19 11:25:37.622473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.622480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.622788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.622794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.623108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.623115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.623413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.623420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.623727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.623734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 
00:31:29.736 [2024-11-19 11:25:37.624030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.624037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.624386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.624393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.624740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.624747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.625071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.625078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.625241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.625248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 
00:31:29.736 [2024-11-19 11:25:37.625452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.625459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.625773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.625780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.626078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.626085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.626393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.626401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.626618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.626625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 
00:31:29.736 [2024-11-19 11:25:37.626932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.626939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.627332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.627339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.627654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.627661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.627857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.627866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.628174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.628181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 
00:31:29.736 [2024-11-19 11:25:37.628476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.628482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.628772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.628779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.629098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.629105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.629418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.629425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 00:31:29.736 [2024-11-19 11:25:37.629625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.629631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.736 qpair failed and we were unable to recover it. 
00:31:29.736 [2024-11-19 11:25:37.629903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.736 [2024-11-19 11:25:37.629910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.630237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.630244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.630536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.630543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.630865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.630872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.631148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.631156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 
00:31:29.737 [2024-11-19 11:25:37.631481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.631488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.631692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.631699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.632019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.632027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.632332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.632339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.632622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.632629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 
00:31:29.737 [2024-11-19 11:25:37.632954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.632961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.633169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.633176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.633443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.633450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.633760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.633768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 00:31:29.737 [2024-11-19 11:25:37.633945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.737 [2024-11-19 11:25:37.633955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.737 qpair failed and we were unable to recover it. 
00:31:29.737 [2024-11-19 11:25:37.634260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.737 [2024-11-19 11:25:37.634267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.737 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated for every retry between 11:25:37.634 and 11:25:37.668, all against tqpair=0x7fe3e4000b90, addr=10.0.0.2, port=4420 ...]
00:31:29.740 [2024-11-19 11:25:37.668103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.740 [2024-11-19 11:25:37.668111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.740 qpair failed and we were unable to recover it.
00:31:29.740 [2024-11-19 11:25:37.668417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.668425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.668716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.668722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.669030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.669038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.669206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.669213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.669437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.669446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 
00:31:29.740 [2024-11-19 11:25:37.669823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.669829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.670100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.670107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.670465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.670471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.670797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.670803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.671122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.671129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 
00:31:29.740 [2024-11-19 11:25:37.671421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.671428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.671743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.671751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.671907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.671914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.672210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.672217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.672569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.672575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 
00:31:29.740 [2024-11-19 11:25:37.672782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.672789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.740 qpair failed and we were unable to recover it. 00:31:29.740 [2024-11-19 11:25:37.673118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.740 [2024-11-19 11:25:37.673125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.673414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.673422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.673776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.673783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.674148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.674155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 
00:31:29.741 [2024-11-19 11:25:37.674362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.674368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.674666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.674672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.674981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.674988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.675187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.675193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.675372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.675379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 
00:31:29.741 [2024-11-19 11:25:37.675554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.675561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.675825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.675832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.676157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.676164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.676456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.676463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.676726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.676733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 
00:31:29.741 [2024-11-19 11:25:37.677030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.677038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.677244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.677251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.677559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.677566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.677885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.677892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.678246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.678252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 
00:31:29.741 [2024-11-19 11:25:37.678395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.678401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.678864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.678872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.679159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.679166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.679490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.679498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.679852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.679860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 
00:31:29.741 [2024-11-19 11:25:37.680171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.680178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.680462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.680470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.680778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.680786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.680958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.680966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.681261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.681271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 
00:31:29.741 [2024-11-19 11:25:37.681537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.681545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.681848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.681856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.682137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.682145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.682416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.682423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.682731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.682738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 
00:31:29.741 [2024-11-19 11:25:37.683047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.683054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.683232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.683239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.683483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.683491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.683824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.741 [2024-11-19 11:25:37.683831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.741 qpair failed and we were unable to recover it. 00:31:29.741 [2024-11-19 11:25:37.684122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.684130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 
00:31:29.742 [2024-11-19 11:25:37.684437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.684443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.684721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.684729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.685030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.685037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.685338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.685345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.685658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.685665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 
00:31:29.742 [2024-11-19 11:25:37.685980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.685987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.686197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.686203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.686485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.686492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.686817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.686824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.687110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.687117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 
00:31:29.742 [2024-11-19 11:25:37.687438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.687445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.687734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.687742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.688082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.688089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.688393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.688408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.688715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.688723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 
00:31:29.742 [2024-11-19 11:25:37.689043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.689051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.689355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.689363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.689675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.689682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.689992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.689999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 00:31:29.742 [2024-11-19 11:25:37.690207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.690215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 
00:31:29.742 [2024-11-19 11:25:37.690493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.742 [2024-11-19 11:25:37.690501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.742 qpair failed and we were unable to recover it. 
00:31:29.742 [2024-11-19 11:25:37.690811 .. 11:25:37.724928] (same three-message sequence repeated: posix.c:1054:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") 
00:31:29.745 [2024-11-19 11:25:37.725335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.725343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.745 [2024-11-19 11:25:37.725665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.725674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.745 [2024-11-19 11:25:37.725977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.725984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.745 [2024-11-19 11:25:37.726231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.726239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.745 [2024-11-19 11:25:37.726539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.726546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 
00:31:29.745 [2024-11-19 11:25:37.726860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.726869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.745 [2024-11-19 11:25:37.727181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.727188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.745 [2024-11-19 11:25:37.727501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.727508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.745 [2024-11-19 11:25:37.727818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.727825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.745 [2024-11-19 11:25:37.728136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.728144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 
00:31:29.745 [2024-11-19 11:25:37.728471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.745 [2024-11-19 11:25:37.728477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.745 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.728765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.728772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.728984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.728991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.729302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.729308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.729673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.729679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 
00:31:29.746 [2024-11-19 11:25:37.730020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.730028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.730248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.730255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.730520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.730527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.730708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.730716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.730824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.730831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 
00:31:29.746 [2024-11-19 11:25:37.731136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.731143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.731426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.731434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.731734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.731741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.731947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.731954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.732273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.732279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 
00:31:29.746 [2024-11-19 11:25:37.732467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.732474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.732787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.732794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.733037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.733044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.733332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.733340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.733654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.733661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 
00:31:29.746 [2024-11-19 11:25:37.733965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.733972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.734279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.734286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.734655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.734662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.734927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.734934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.735259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.735265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 
00:31:29.746 [2024-11-19 11:25:37.735583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.735590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.735986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.735993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.736265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.736272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.736497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.736503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.736839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.736846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 
00:31:29.746 [2024-11-19 11:25:37.737152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.737159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.737472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.737483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.737674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.737681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.738032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.738040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.738344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.738352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 
00:31:29.746 [2024-11-19 11:25:37.738515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.738522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.738809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.738815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.739141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.739148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.746 [2024-11-19 11:25:37.739515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.746 [2024-11-19 11:25:37.739521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.746 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.739685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.739691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 
00:31:29.747 [2024-11-19 11:25:37.740017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.740025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.740354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.740360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.740671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.740677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.740978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.740985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.741314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.741321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 
00:31:29.747 [2024-11-19 11:25:37.741607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.741614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.741808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.741815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.742131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.742139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.742347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.742354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.742570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.742576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 
00:31:29.747 [2024-11-19 11:25:37.742893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.742900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.743118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.743125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.743331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.743337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.743539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.743545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.743836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.743843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 
00:31:29.747 [2024-11-19 11:25:37.744131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.744138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.744323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.744331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.744652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.744658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.744926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.744933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.745164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.745171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 
00:31:29.747 [2024-11-19 11:25:37.745448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.745454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.745745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.745751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.745967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.745974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.746278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.746285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 00:31:29.747 [2024-11-19 11:25:37.746602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.747 [2024-11-19 11:25:37.746608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.747 qpair failed and we were unable to recover it. 
00:31:29.747 [2024-11-19 11:25:37.746912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.747 [2024-11-19 11:25:37.746919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.747 qpair failed and we were unable to recover it.
[... the same three-line error group (connect() failed, errno = 111 / sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 11:25:37.747 through 11:25:37.779; identical repeats elided ...]
00:31:29.750 [2024-11-19 11:25:37.780121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.750 [2024-11-19 11:25:37.780128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.750 qpair failed and we were unable to recover it. 00:31:29.750 [2024-11-19 11:25:37.780296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.750 [2024-11-19 11:25:37.780303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.750 qpair failed and we were unable to recover it. 00:31:29.750 [2024-11-19 11:25:37.780630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.750 [2024-11-19 11:25:37.780636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.750 qpair failed and we were unable to recover it. 00:31:29.750 [2024-11-19 11:25:37.781052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.750 [2024-11-19 11:25:37.781059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.750 qpair failed and we were unable to recover it. 00:31:29.750 [2024-11-19 11:25:37.781294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.750 [2024-11-19 11:25:37.781301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 
00:31:29.751 [2024-11-19 11:25:37.781626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.781633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.781964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.781971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.782313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.782329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.782640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.782647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.782939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.782947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 
00:31:29.751 [2024-11-19 11:25:37.783263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.783270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.783591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.783599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.783911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.783919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.784219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.784233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.784543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.784550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 
00:31:29.751 [2024-11-19 11:25:37.784902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.784909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.785088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.785097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.785412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.785419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.785720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.785726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.785897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.785905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 
00:31:29.751 [2024-11-19 11:25:37.786008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.786015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.786305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.786313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.786648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.786655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.786881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.786888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.787183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.787190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 
00:31:29.751 [2024-11-19 11:25:37.787472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.787479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.787703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.787709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.788079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.788087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.788306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.788314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.788625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.788632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 
00:31:29.751 [2024-11-19 11:25:37.788996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.789004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.789303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.789310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.789602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.789609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.789798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.789806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.789981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.789989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 
00:31:29.751 [2024-11-19 11:25:37.790252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.790259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.790606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.790614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.791004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.791011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.791319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.791326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.791533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.791540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 
00:31:29.751 [2024-11-19 11:25:37.791685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.791693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.792072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.792079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.751 qpair failed and we were unable to recover it. 00:31:29.751 [2024-11-19 11:25:37.792399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.751 [2024-11-19 11:25:37.792407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.792601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.792609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.792929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.792936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 
00:31:29.752 [2024-11-19 11:25:37.793264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.793271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.793559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.793565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.793780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.793787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.794069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.794078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.794282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.794290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 
00:31:29.752 [2024-11-19 11:25:37.794562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.794569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.794874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.794886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.795246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.795253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.795564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.795570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.795770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.795777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 
00:31:29.752 [2024-11-19 11:25:37.795989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.795996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.796298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.796304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.796488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.796496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.796692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.796699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.796879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.796886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 
00:31:29.752 [2024-11-19 11:25:37.797116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.797123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.797469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.797476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.797808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.797815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.798013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.798021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.798255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.798263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 
00:31:29.752 [2024-11-19 11:25:37.798456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.798462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.798801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.798808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.799103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.799111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.799427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.799434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.799745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.799751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 
00:31:29.752 [2024-11-19 11:25:37.800037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.800044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.800246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.800252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.800568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.800575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.800896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.800903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.801253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.801260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 
00:31:29.752 [2024-11-19 11:25:37.801569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.801576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.801775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.801782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.802093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.802100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.802409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.802416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 00:31:29.752 [2024-11-19 11:25:37.802753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.752 [2024-11-19 11:25:37.802759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.752 qpair failed and we were unable to recover it. 
00:31:29.756 [2024-11-19 11:25:37.835689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.835696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.836004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.836011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.836335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.836341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.836422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.836429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.836742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.836749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 
00:31:29.756 [2024-11-19 11:25:37.836919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.836926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.837126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.837133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.837321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.837328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.837608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.837616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.837987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.837994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 
00:31:29.756 [2024-11-19 11:25:37.838308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.838316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.838522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.838529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.838851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.838858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.839169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.839175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.839303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.839310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 
00:31:29.756 [2024-11-19 11:25:37.839606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.839613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.839885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.839893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.840165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.840171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.840460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.840468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.840642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.840650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 
00:31:29.756 [2024-11-19 11:25:37.840927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.840934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.841236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.841243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.841554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.841561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.841753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.841760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.841997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.842005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 
00:31:29.756 [2024-11-19 11:25:37.842311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.842318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.842622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.842636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.842956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.842964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.843262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.843270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.843578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.843585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 
00:31:29.756 [2024-11-19 11:25:37.843905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.843913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.844221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.844228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.844422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.844429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.844622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.844629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.844786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.844794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 
00:31:29.756 [2024-11-19 11:25:37.845104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.845112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.756 [2024-11-19 11:25:37.845328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.756 [2024-11-19 11:25:37.845335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.756 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.845371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.845378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.845541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.845548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.845736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.845744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 
00:31:29.757 [2024-11-19 11:25:37.845794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.845801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.846104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.846110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.846389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.846396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.846689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.846695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.846900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.846908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 
00:31:29.757 [2024-11-19 11:25:37.847148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.847155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.847435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.847442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.847753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.847760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.847920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.847928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.848113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.848120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 
00:31:29.757 [2024-11-19 11:25:37.848319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.848326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.848524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.848531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.848851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.848860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.849171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.849178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.849474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.849481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 
00:31:29.757 [2024-11-19 11:25:37.849681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.849688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.850010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.850017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.850335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.850343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.850524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.850531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.850829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.850837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 
00:31:29.757 [2024-11-19 11:25:37.851145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.851152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.851447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.851454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.851757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.851764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.852075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.852082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.852404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.852411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 
00:31:29.757 [2024-11-19 11:25:37.852746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.852753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.853121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.853128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.853406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.853412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.853626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.853633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.853814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.853821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 
00:31:29.757 [2024-11-19 11:25:37.854106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.854114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.854478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.854485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.854678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.854685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.854892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.854899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.757 qpair failed and we were unable to recover it. 00:31:29.757 [2024-11-19 11:25:37.855174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.757 [2024-11-19 11:25:37.855182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.758 qpair failed and we were unable to recover it. 
00:31:29.758 [2024-11-19 11:25:37.855477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.758 [2024-11-19 11:25:37.855485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.758 qpair failed and we were unable to recover it. 00:31:29.758 [2024-11-19 11:25:37.855779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.758 [2024-11-19 11:25:37.855786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.758 qpair failed and we were unable to recover it. 00:31:29.758 [2024-11-19 11:25:37.855961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.758 [2024-11-19 11:25:37.855968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.758 qpair failed and we were unable to recover it. 
00:31:29.758 Read completed with error (sct=0, sc=8) 00:31:29.758 starting I/O failed
[~31 further queued I/O completion records omitted: reads and writes all completed with error (sct=0, sc=8), each followed by "starting I/O failed".]
00:31:29.758 [2024-11-19 11:25:37.856682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:29.758 [2024-11-19 11:25:37.857109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.758 [2024-11-19 11:25:37.857220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:31:29.758 qpair failed and we were unable to recover it. 00:31:29.758 [2024-11-19 11:25:37.857527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.758 [2024-11-19 11:25:37.857564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.857887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.857899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.858239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.858248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.858619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.858626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.858973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.858981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.859198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.859206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.859517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.859525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.859840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.859846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.860005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.860012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.860389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.860395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.860605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.860612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.860970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.860977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.861297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.861304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.861588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.861595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.861919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.861927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.862149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.862156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.862452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.862458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.862803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.862810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.863108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.863116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.863442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.863449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.863755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.758 [2024-11-19 11:25:37.863761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.758 qpair failed and we were unable to recover it.
00:31:29.758 [2024-11-19 11:25:37.864104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.864111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.864407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.864414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.864716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.864722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.865038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.865045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.865415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.865422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.865609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.865616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.865935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.865942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.866175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.866182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.866397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.866405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.866717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.866725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.867058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.867065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.867250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.867257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.867580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.867586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.867881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.867888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.868139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.868146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.868441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.868448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.868807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.868813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.869100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.869107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.869407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.869414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.869733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.869748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.870051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.870057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.870336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.870344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.870645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.870652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.870931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.870938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.871246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.871253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.871418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.871425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.871596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.871604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.871903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.871910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.872365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.872372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.872661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.872669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.872885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.872892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.873188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.873194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.873524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.873530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.759 qpair failed and we were unable to recover it.
00:31:29.759 [2024-11-19 11:25:37.873842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.759 [2024-11-19 11:25:37.873848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.873960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.873967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.874208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.874215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.874519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.874526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.874845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.874852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.875161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.875169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.875474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.875482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.875700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.875707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.875888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.875897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.876136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.876143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.876315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.876322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.876647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.876654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.877020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.877027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.877402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.877409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.877718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.877725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.878114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.878121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.878419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.878426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.878753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.878760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.879078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.879086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.879399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.879405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.879697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.879704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.880044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.880051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.880346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.880352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.880713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.880719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.881036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.881043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.881388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.881395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.881701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.881708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.881921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.881934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.882262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.882270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.882569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.882576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.882925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.882933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.883258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.883265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.883428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.883435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.883804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.883811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.884107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.884115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.884418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.884424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.884624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.760 [2024-11-19 11:25:37.884631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.760 qpair failed and we were unable to recover it.
00:31:29.760 [2024-11-19 11:25:37.884974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.760 [2024-11-19 11:25:37.884982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.760 qpair failed and we were unable to recover it. 00:31:29.760 [2024-11-19 11:25:37.885153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.885160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.885466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.885473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.885798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.885804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.886019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.886026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 
00:31:29.761 [2024-11-19 11:25:37.886331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.886338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.886648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.886655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.886966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.886973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.887302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.887309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.887609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.887616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 
00:31:29.761 [2024-11-19 11:25:37.887928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.887935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.888311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.888317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.888626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.888633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.888950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.888957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.889273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.889280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 
00:31:29.761 [2024-11-19 11:25:37.889601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.889608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.889825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.889831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.890152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.890160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.890494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.890501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.890781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.890789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 
00:31:29.761 [2024-11-19 11:25:37.891108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.891115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.891456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.891463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.891724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.891731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.892030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.892037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.892357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.892364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 
00:31:29.761 [2024-11-19 11:25:37.892673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.892680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.892993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.893001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.893318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.893326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.893618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.893626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.893932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.893939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 
00:31:29.761 [2024-11-19 11:25:37.894247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.894254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.894613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.894622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.894833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.894839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.895123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.895131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.895458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.895465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 
00:31:29.761 [2024-11-19 11:25:37.895774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.895781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.896085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.896093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.896308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.896314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.761 qpair failed and we were unable to recover it. 00:31:29.761 [2024-11-19 11:25:37.896624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.761 [2024-11-19 11:25:37.896631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.896989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.896995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 
00:31:29.762 [2024-11-19 11:25:37.897161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.897169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.897500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.897506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.897823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.897830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.898130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.898138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.898449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.898455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 
00:31:29.762 [2024-11-19 11:25:37.898644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.898651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.898889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.898897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.899182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.899189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.899482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.899489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.899787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.899795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 
00:31:29.762 [2024-11-19 11:25:37.900105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.900112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.900422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.900428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.900742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.900749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.901022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.901029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.901357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.901364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 
00:31:29.762 [2024-11-19 11:25:37.901756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.901764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.902063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.902072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.902370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.902378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.902717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.902725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.902935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.902942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 
00:31:29.762 [2024-11-19 11:25:37.903240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.903247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.903543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.903549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.903860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.903870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.904193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.904200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.904468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.904475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 
00:31:29.762 [2024-11-19 11:25:37.904798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.904804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.904966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.904973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.905255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.905263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.905576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.905583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.905914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.905921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 
00:31:29.762 [2024-11-19 11:25:37.906209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.906217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.906416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.906424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.906733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.906740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.907047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.907054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.907361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.907369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 
00:31:29.762 [2024-11-19 11:25:37.907669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.762 [2024-11-19 11:25:37.907677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.762 qpair failed and we were unable to recover it. 00:31:29.762 [2024-11-19 11:25:37.907978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.907984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 00:31:29.763 [2024-11-19 11:25:37.908228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.908236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 00:31:29.763 [2024-11-19 11:25:37.908590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.908597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 00:31:29.763 [2024-11-19 11:25:37.908764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.908771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 
00:31:29.763 [2024-11-19 11:25:37.908964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.908971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 00:31:29.763 [2024-11-19 11:25:37.909128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.909136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 00:31:29.763 [2024-11-19 11:25:37.909526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.909533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 00:31:29.763 [2024-11-19 11:25:37.909803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.909809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 00:31:29.763 [2024-11-19 11:25:37.910073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.763 [2024-11-19 11:25:37.910080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.763 qpair failed and we were unable to recover it. 
00:31:29.763 [2024-11-19 11:25:37.910411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.763 [2024-11-19 11:25:37.910418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.763 qpair failed and we were unable to recover it.
[the three lines above repeated verbatim for ~115 consecutive connect() attempts, timestamps 11:25:37.910411 through 11:25:37.945865, all failing with errno = 111 (ECONNREFUSED) for the same tqpair=0x7fe3e4000b90, addr=10.0.0.2, port=4420]
00:31:29.766 [2024-11-19 11:25:37.945856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.766 [2024-11-19 11:25:37.945865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.766 qpair failed and we were unable to recover it.
00:31:29.766 [2024-11-19 11:25:37.946160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.946167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.946477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.946484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.946676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.946684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.946869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.946878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.947054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.947062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 
00:31:29.766 [2024-11-19 11:25:37.947337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.947344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.947707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.947713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.948017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.948024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.948329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.948336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.948659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.948666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 
00:31:29.766 [2024-11-19 11:25:37.948894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.948901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.949219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.949225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.949546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.949553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.949904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.949911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.950187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.950194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 
00:31:29.766 [2024-11-19 11:25:37.950516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.950522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.950834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.950841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.951140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.766 [2024-11-19 11:25:37.951148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.766 qpair failed and we were unable to recover it. 00:31:29.766 [2024-11-19 11:25:37.951439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.951447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.951757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.951763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 
00:31:29.767 [2024-11-19 11:25:37.952035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.952042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.952359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.952366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.952650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.952658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.952968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.952975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.953264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.953272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 
00:31:29.767 [2024-11-19 11:25:37.953578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.953586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.953895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.953904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.954227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.954234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.954545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.954552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.954860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.954871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 
00:31:29.767 [2024-11-19 11:25:37.955188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.955196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.955514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.955520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.955805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.955813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.956123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.956130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.956458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.956465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 
00:31:29.767 [2024-11-19 11:25:37.956774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.956781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.957111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.957118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.957430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.957437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.957605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.957612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.957905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.957912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 
00:31:29.767 [2024-11-19 11:25:37.958201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.958208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.958600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.958606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.958834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.958842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.959029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.959036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.959205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.959212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 
00:31:29.767 [2024-11-19 11:25:37.959542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.959549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.959826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.959832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.960035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.960043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.960309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.960316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.960513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.960519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 
00:31:29.767 [2024-11-19 11:25:37.960851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.960858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.961148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.961155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.961455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.767 [2024-11-19 11:25:37.961463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.767 qpair failed and we were unable to recover it. 00:31:29.767 [2024-11-19 11:25:37.961771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.961779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.961989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.961996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 
00:31:29.768 [2024-11-19 11:25:37.962300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.962307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.962478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.962486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.962797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.962805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.963130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.963140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.963300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.963308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 
00:31:29.768 [2024-11-19 11:25:37.963624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.963631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.963937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.963944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.964261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.964268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.964575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.964582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.964789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.964796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 
00:31:29.768 [2024-11-19 11:25:37.965091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.965098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.965398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.965406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.965755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.965762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.966050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.966057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.966357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.966364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 
00:31:29.768 [2024-11-19 11:25:37.966650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.966658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.966985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.966991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.967285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.967292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.967599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.967605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 00:31:29.768 [2024-11-19 11:25:37.967968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.768 [2024-11-19 11:25:37.967976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.768 qpair failed and we were unable to recover it. 
00:31:29.768 [2024-11-19 11:25:37.968294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:29.768 [2024-11-19 11:25:37.968300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:29.768 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 11:25:37.968 through 11:25:38.002 ...]
00:31:29.771 [2024-11-19 11:25:38.002679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.002686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.771 [2024-11-19 11:25:38.003000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.003007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.771 [2024-11-19 11:25:38.003327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.003334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.771 [2024-11-19 11:25:38.003641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.003648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.771 [2024-11-19 11:25:38.003937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.003944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 
00:31:29.771 [2024-11-19 11:25:38.004240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.004247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.771 [2024-11-19 11:25:38.004553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.004560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.771 [2024-11-19 11:25:38.004858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.004870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.771 [2024-11-19 11:25:38.005148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.005156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.771 [2024-11-19 11:25:38.005483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.005491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 
00:31:29.771 [2024-11-19 11:25:38.005732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.771 [2024-11-19 11:25:38.005739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.771 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.006028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.006036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.006360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.006367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.006656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.006663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.006972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.006980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 
00:31:29.772 [2024-11-19 11:25:38.007363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.007370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.007657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.007663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.007838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.007844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.008211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.008218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.008528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.008535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 
00:31:29.772 [2024-11-19 11:25:38.008891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.008898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.009156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.009163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.009541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.009547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.009753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.009760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.010086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.010094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 
00:31:29.772 [2024-11-19 11:25:38.010408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.010415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.010738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.010744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.010961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.010968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.011269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.011277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.011591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.011598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 
00:31:29.772 [2024-11-19 11:25:38.011910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.011918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.012213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.012227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.012567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.012574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.012781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.012789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.013097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.013104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 
00:31:29.772 [2024-11-19 11:25:38.013413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.013421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.013710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.013717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.013911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.013918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.014233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.014241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.014551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.014558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 
00:31:29.772 [2024-11-19 11:25:38.014866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.014874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.015059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.015065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.015332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.015340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.015647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.015654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.015953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.015960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 
00:31:29.772 [2024-11-19 11:25:38.016280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.016287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.016567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.016574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.016883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.016890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.017188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.017195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.017365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.017371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 
00:31:29.772 [2024-11-19 11:25:38.017683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.017690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.772 qpair failed and we were unable to recover it. 00:31:29.772 [2024-11-19 11:25:38.017900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.772 [2024-11-19 11:25:38.017907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.018181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.018189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.018508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.018515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.018866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.018874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 
00:31:29.773 [2024-11-19 11:25:38.019154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.019161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.019480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.019488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.019794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.019800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.020099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.020106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.020431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.020438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 
00:31:29.773 [2024-11-19 11:25:38.020750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.020756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.020990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.020997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.021320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.021326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.021636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.021642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.021939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.021947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 
00:31:29.773 [2024-11-19 11:25:38.022270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.022277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.022595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.022602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.022890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.022898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.023210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.023218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.023508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.023516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 
00:31:29.773 [2024-11-19 11:25:38.023832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.023839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.024136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.024144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.024447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.024454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.024771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.024778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.025078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.025085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 
00:31:29.773 [2024-11-19 11:25:38.025352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.025359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.025566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.025573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.025878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.025885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.026175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.026183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 00:31:29.773 [2024-11-19 11:25:38.026504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.773 [2024-11-19 11:25:38.026511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.773 qpair failed and we were unable to recover it. 
00:31:29.777 [2024-11-19 11:25:38.057701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.777 [2024-11-19 11:25:38.057708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.777 qpair failed and we were unable to recover it. 00:31:29.777 [2024-11-19 11:25:38.057882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.777 [2024-11-19 11:25:38.057889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:29.777 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.058362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.058370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.058733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.058740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.059035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.059042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 
00:31:30.056 [2024-11-19 11:25:38.059254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.059261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.059576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.059583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.059914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.059921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.060290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.060298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.060609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.060616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 
00:31:30.056 [2024-11-19 11:25:38.060915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.060922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.061130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.061136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.061361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.061368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.061697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.061704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.062041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.062048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 
00:31:30.056 [2024-11-19 11:25:38.062333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.062340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.062733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.062740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.063030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.063038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.063357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.063364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.063658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.063665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 
00:31:30.056 [2024-11-19 11:25:38.063992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.064000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.064311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.064319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.064516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.064523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.064816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.064823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.065137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.065144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 
00:31:30.056 [2024-11-19 11:25:38.065458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.065466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.065836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.065843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.066149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.066157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.066488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.066495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.066679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.066686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 
00:31:30.056 [2024-11-19 11:25:38.067053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.067060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.067365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.067373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.067529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.067536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.067882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.067890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 00:31:30.056 [2024-11-19 11:25:38.068066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.056 [2024-11-19 11:25:38.068074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.056 qpair failed and we were unable to recover it. 
00:31:30.057 [2024-11-19 11:25:38.068260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.068267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.068556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.068563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.068726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.068734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.069070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.069080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.069283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.069290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 
00:31:30.057 [2024-11-19 11:25:38.069584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.069592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.069781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.069787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.070057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.070064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.070392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.070398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.070694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.070701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 
00:31:30.057 [2024-11-19 11:25:38.071033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.071040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.071362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.071369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.071679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.071687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.071983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.071989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.072302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.072309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 
00:31:30.057 [2024-11-19 11:25:38.072490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.072497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.072705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.072712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.073048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.073056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.073361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.073368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.073687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.073695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 
00:31:30.057 [2024-11-19 11:25:38.073891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.073899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.074186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.074193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.074534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.074540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.074845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.074852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.075159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.075166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 
00:31:30.057 [2024-11-19 11:25:38.075465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.075472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.075809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.075815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.076142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.076154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.076463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.076470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.076760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.076767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 
00:31:30.057 [2024-11-19 11:25:38.076939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.076946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.077231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.077238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.077625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.077632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.077974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.077982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.078290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.078297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 
00:31:30.057 [2024-11-19 11:25:38.078607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.078614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.057 [2024-11-19 11:25:38.078924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.057 [2024-11-19 11:25:38.078931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.057 qpair failed and we were unable to recover it. 00:31:30.058 [2024-11-19 11:25:38.079240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.058 [2024-11-19 11:25:38.079248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.058 qpair failed and we were unable to recover it. 00:31:30.058 [2024-11-19 11:25:38.079598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.058 [2024-11-19 11:25:38.079606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.058 qpair failed and we were unable to recover it. 00:31:30.058 [2024-11-19 11:25:38.079793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.058 [2024-11-19 11:25:38.079800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.058 qpair failed and we were unable to recover it. 
00:31:30.058 [2024-11-19 11:25:38.080147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.058 [2024-11-19 11:25:38.080155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.058 qpair failed and we were unable to recover it.
00:31:30.059 [... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 11:25:38.080460 through 11:25:38.095947 ...]
00:31:30.059 Read completed with error (sct=0, sc=8)
00:31:30.059 starting I/O failed
00:31:30.059 [... further Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:31:30.059 [2024-11-19 11:25:38.096669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:30.059 [2024-11-19 11:25:38.096847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1b020 is same with the state(6) to be set
00:31:30.060 [... further Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:31:30.060 [2024-11-19 11:25:38.097257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:30.060 [2024-11-19 11:25:38.097581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.060 [2024-11-19 11:25:38.097590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.060 qpair failed and we were unable to recover it.
00:31:30.061 [... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 11:25:38.097763 through 11:25:38.109344 ...]
00:31:30.061 [2024-11-19 11:25:38.109513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.109521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.109743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.109750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.110062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.110069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.110276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.110283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.110548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.110561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 
00:31:30.061 [2024-11-19 11:25:38.110858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.110868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.111253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.111260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.111572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.111579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.111874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.111882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.112197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.112204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 
00:31:30.061 [2024-11-19 11:25:38.112511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.112517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.112871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.112878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.113165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.113172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.113490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.113497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.113869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.113879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 
00:31:30.061 [2024-11-19 11:25:38.113961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.113968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.114299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.114306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.114578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.114585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.114791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.114797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.115085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.115092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 
00:31:30.061 [2024-11-19 11:25:38.115417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.115424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.115748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.115755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.116039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.061 [2024-11-19 11:25:38.116047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.061 qpair failed and we were unable to recover it. 00:31:30.061 [2024-11-19 11:25:38.116335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.116343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.116627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.116634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 
00:31:30.062 [2024-11-19 11:25:38.116841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.116848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.117016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.117023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.117333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.117340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.117628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.117636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.117921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.117928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 
00:31:30.062 [2024-11-19 11:25:38.118238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.118245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.118551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.118558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.118867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.118874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.119172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.119179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.119389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.119396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 
00:31:30.062 [2024-11-19 11:25:38.119706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.119713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.120002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.120009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.120318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.120325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.120632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.120639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.121005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.121013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 
00:31:30.062 [2024-11-19 11:25:38.121326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.121333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.121642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.121650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.121980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.121987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.122299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.122308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.122622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.122629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 
00:31:30.062 [2024-11-19 11:25:38.123384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.123400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.123671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.123680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.123996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.124003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.124326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.124333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.124608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.124615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 
00:31:30.062 [2024-11-19 11:25:38.124925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.124933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.125258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.125265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.125577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.125584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.125755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.125763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.126081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.126090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 
00:31:30.062 [2024-11-19 11:25:38.126262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.126270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.126470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.126477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.126795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.126801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.127107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.127114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.062 [2024-11-19 11:25:38.127400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.127407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 
00:31:30.062 [2024-11-19 11:25:38.127717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.062 [2024-11-19 11:25:38.127724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.062 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.128031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.128038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.128322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.128329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.128637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.128644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.128955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.128962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 
00:31:30.063 [2024-11-19 11:25:38.129281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.129288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.129582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.129590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.129907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.129915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.130283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.130290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.130611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.130618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 
00:31:30.063 [2024-11-19 11:25:38.130965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.130972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.131272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.131279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.131600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.131606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.131909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.131917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 00:31:30.063 [2024-11-19 11:25:38.132214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.063 [2024-11-19 11:25:38.132227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.063 qpair failed and we were unable to recover it. 
00:31:30.063 [2024-11-19 11:25:38.132543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.063 [2024-11-19 11:25:38.132549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.063 qpair failed and we were unable to recover it.
00:31:30.066 [2024-11-19 11:25:38.166161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.166168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.066 [2024-11-19 11:25:38.166475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.166482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.066 [2024-11-19 11:25:38.166759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.166766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.066 [2024-11-19 11:25:38.166958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.166966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.066 [2024-11-19 11:25:38.167236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.167244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 
00:31:30.066 [2024-11-19 11:25:38.167550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.167557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.066 [2024-11-19 11:25:38.167866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.167873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.066 [2024-11-19 11:25:38.168145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.168152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.066 [2024-11-19 11:25:38.168434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.168441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.066 [2024-11-19 11:25:38.168767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.168775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 
00:31:30.066 [2024-11-19 11:25:38.169083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.066 [2024-11-19 11:25:38.169092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.066 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.169400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.169408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.169708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.169715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.169922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.169930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.170111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.170118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 
00:31:30.067 [2024-11-19 11:25:38.170440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.170447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.170726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.170732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.171069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.171077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.171370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.171377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.171691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.171697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 
00:31:30.067 [2024-11-19 11:25:38.172010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.172017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.172325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.172333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.172641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.172648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.172959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.172966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.173140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.173149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 
00:31:30.067 [2024-11-19 11:25:38.173415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.173421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.173749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.173757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.174122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.174129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.174432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.174439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.174745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.174752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 
00:31:30.067 [2024-11-19 11:25:38.175037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.175043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.175361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.175368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.175551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.175558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.175754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.175761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.175923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.175931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 
00:31:30.067 [2024-11-19 11:25:38.176206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.176213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.176507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.176515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.176826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.176833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.177021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.177027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.177436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.177444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 
00:31:30.067 [2024-11-19 11:25:38.177645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.177653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.177959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.177967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.178267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.178275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.178585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.178591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.178888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.178895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 
00:31:30.067 [2024-11-19 11:25:38.179212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.179218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.179423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.179430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.067 [2024-11-19 11:25:38.179754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.067 [2024-11-19 11:25:38.179761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.067 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.179987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.179995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.180208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.180214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 
00:31:30.068 [2024-11-19 11:25:38.180529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.180536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.180898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.180905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.181107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.181114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.181412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.181419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.181724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.181731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 
00:31:30.068 [2024-11-19 11:25:38.182029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.182037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.182237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.182244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.182553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.182560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.182888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.182895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.183205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.183212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 
00:31:30.068 [2024-11-19 11:25:38.183520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.183527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.183838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.183846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.184151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.184158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.184449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.184456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.184705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.184713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 
00:31:30.068 [2024-11-19 11:25:38.185030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.185037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.185320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.185327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.185607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.185615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.185927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.185934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.186213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.186221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 
00:31:30.068 [2024-11-19 11:25:38.186538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.186545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.186824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.186831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.187146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.187153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.187464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.187470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.187777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.187783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 
00:31:30.068 [2024-11-19 11:25:38.188095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.188102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.188417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.188424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.188731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.188738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.189070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.189077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.189388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.189396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 
00:31:30.068 [2024-11-19 11:25:38.189700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.189707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.189991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.189999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.190308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.190315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.190632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.190640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.068 [2024-11-19 11:25:38.190938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.190945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 
00:31:30.068 [2024-11-19 11:25:38.191245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.068 [2024-11-19 11:25:38.191253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.068 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.191442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.191449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.191772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.191779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.191941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.191948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.192265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.192272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 
00:31:30.069 [2024-11-19 11:25:38.192567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.192574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.192888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.192896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.193204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.193212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.193517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.193524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.193833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.193840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 
00:31:30.069 [2024-11-19 11:25:38.194143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.194150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.194439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.194446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.194760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.194768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.195095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.195103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.195437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.195445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 
00:31:30.069 [2024-11-19 11:25:38.195777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.195785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.196075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.196083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.196417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.196425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.196756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.196763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.197076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.197085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 
00:31:30.069 [2024-11-19 11:25:38.197274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.197281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.197572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.197580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.197888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.197896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.198200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.198206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.198542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.198548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 
00:31:30.069 [2024-11-19 11:25:38.198741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.198748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.199109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.199116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.199431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.199438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.199757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.199764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.200075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.200082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 
00:31:30.069 [2024-11-19 11:25:38.200262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.200269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.200554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.200560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.200880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.200888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.201202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.201210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.201519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.201525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 
00:31:30.069 [2024-11-19 11:25:38.201822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.201829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.202008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.202016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.202338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.069 [2024-11-19 11:25:38.202345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.069 qpair failed and we were unable to recover it. 00:31:30.069 [2024-11-19 11:25:38.202670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.202677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.202987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.202994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 
00:31:30.070 [2024-11-19 11:25:38.203316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.203323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.203626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.203633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.203943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.203951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.204259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.204267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.204592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.204599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 
00:31:30.070 [2024-11-19 11:25:38.204912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.204919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.205234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.205242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.205418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.205426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.205605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.205612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.205831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.205837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 
00:31:30.070 [2024-11-19 11:25:38.206173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.206180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.206492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.206499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.206730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.206736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.206988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.206996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.207301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.207308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 
00:31:30.070 [2024-11-19 11:25:38.207618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.207625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.207919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.207935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.208247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.208254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.208447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.208453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.208736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.208745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 
00:31:30.070 [2024-11-19 11:25:38.209021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.209028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.209329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.209338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.209641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.209648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.209945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.209952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.210158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.210165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 
00:31:30.070 [2024-11-19 11:25:38.210320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.210327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.210639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.210646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.210963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.210970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.211203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.211210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.211512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.211519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 
00:31:30.070 [2024-11-19 11:25:38.211817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.070 [2024-11-19 11:25:38.211825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.070 qpair failed and we were unable to recover it. 00:31:30.070 [2024-11-19 11:25:38.212104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.212111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.212444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.212452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.212758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.212764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.213078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.213085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 
00:31:30.071 [2024-11-19 11:25:38.213286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.213294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.213478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.213485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.213783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.213789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.214078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.214085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.214410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.214417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 
00:31:30.071 [2024-11-19 11:25:38.214621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.214628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.214931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.214938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.215249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.215257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.215577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.215584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 00:31:30.071 [2024-11-19 11:25:38.215870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.071 [2024-11-19 11:25:38.215878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.071 qpair failed and we were unable to recover it. 
00:31:30.071 [2024-11-19 11:25:38.216062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.071 [2024-11-19 11:25:38.216070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.071 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure record pair (errno = 111, tqpair=0x7fe3e4000b90, addr=10.0.0.2, port=4420) repeats continuously from 11:25:38.216394 through 11:25:38.249021; intermediate repeats elided ...]
00:31:30.074 [2024-11-19 11:25:38.249341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.074 [2024-11-19 11:25:38.249348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.074 qpair failed and we were unable to recover it.
00:31:30.074 [2024-11-19 11:25:38.249638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.249645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.249955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.249962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.250275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.250281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.250580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.250587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.250898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.250905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 
00:31:30.074 [2024-11-19 11:25:38.251217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.251224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.251510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.251519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.251823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.251830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.252153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.252161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.252470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.252476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 
00:31:30.074 [2024-11-19 11:25:38.252796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.252803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.253114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.253121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.253384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.253391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.253705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.253713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.254034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.254042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 
00:31:30.074 [2024-11-19 11:25:38.254355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.254363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.254671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.074 [2024-11-19 11:25:38.254679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.074 qpair failed and we were unable to recover it. 00:31:30.074 [2024-11-19 11:25:38.254763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.254770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.254958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.254966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.255262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.255268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 
00:31:30.075 [2024-11-19 11:25:38.255584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.255591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.255893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.255901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.256241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.256249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.256558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.256565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.257572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.257589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 
00:31:30.075 [2024-11-19 11:25:38.257872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.257881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.258180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.258187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.258478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.258485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.258687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.258693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.259030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.259037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 
00:31:30.075 [2024-11-19 11:25:38.259331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.259337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.259638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.259646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.259959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.259967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.260277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.260284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.260591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.260598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 
00:31:30.075 [2024-11-19 11:25:38.260807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.260814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.261127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.261134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.261330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.261337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.261537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.261545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.261834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.261842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 
00:31:30.075 [2024-11-19 11:25:38.262154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.262161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.262472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.262478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.262772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.262780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.263087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.263094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.263401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.263408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 
00:31:30.075 [2024-11-19 11:25:38.263698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.263706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.264042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.264051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.264357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.264364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.264658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.264664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.264836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.264843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 
00:31:30.075 [2024-11-19 11:25:38.265183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.265191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.265522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.265529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.265845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.265852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.266166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.266173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 00:31:30.075 [2024-11-19 11:25:38.266478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.075 [2024-11-19 11:25:38.266485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.075 qpair failed and we were unable to recover it. 
00:31:30.075 [2024-11-19 11:25:38.266793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.266800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.267114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.267120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.267412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.267418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.267605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.267612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.267938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.267945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 
00:31:30.076 [2024-11-19 11:25:38.268240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.268248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.268457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.268464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.268777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.268784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.269158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.269164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.269460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.269467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 
00:31:30.076 [2024-11-19 11:25:38.269710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.269717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.270061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.270069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.270395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.270402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.270721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.270728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.271056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.271063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 
00:31:30.076 [2024-11-19 11:25:38.271365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.271371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.271678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.271685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.271982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.271989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.272314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.272322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 00:31:30.076 [2024-11-19 11:25:38.272634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.076 [2024-11-19 11:25:38.272641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.076 qpair failed and we were unable to recover it. 
00:31:30.076 [2024-11-19 11:25:38.273007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.076 [2024-11-19 11:25:38.273015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.076 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats approximately 113 more times, with timestamps 11:25:38.273 through 11:25:38.306 ...]
00:31:30.079 [2024-11-19 11:25:38.306124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.079 [2024-11-19 11:25:38.306131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.079 qpair failed and we were unable to recover it.
00:31:30.079 [2024-11-19 11:25:38.306443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.306451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.079 [2024-11-19 11:25:38.306779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.306786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.079 [2024-11-19 11:25:38.307102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.307109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.079 [2024-11-19 11:25:38.307479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.307486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.079 [2024-11-19 11:25:38.307657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.307664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 
00:31:30.079 [2024-11-19 11:25:38.307845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.307852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.079 [2024-11-19 11:25:38.308132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.308139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.079 [2024-11-19 11:25:38.308357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.308364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.079 [2024-11-19 11:25:38.308647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.308654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.079 [2024-11-19 11:25:38.308843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.308850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 
00:31:30.079 [2024-11-19 11:25:38.309040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.079 [2024-11-19 11:25:38.309047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.079 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.309313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.309321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.309688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.309695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.310006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.310013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.310178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.310185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 
00:31:30.080 [2024-11-19 11:25:38.310469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.310476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.310676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.310684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.310990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.310998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.311275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.311282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.311582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.311590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 
00:31:30.080 [2024-11-19 11:25:38.311911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.311918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.312237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.312244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.312637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.312644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.312947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.312954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.313151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.313158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 
00:31:30.080 [2024-11-19 11:25:38.313468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.313476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.313682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.313690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.313993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.314000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.314317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.314324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.314413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.314420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 
00:31:30.080 [2024-11-19 11:25:38.314600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.314608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.314935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.314943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.315124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.315131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.315308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.315316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.315626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.315633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 
00:31:30.080 [2024-11-19 11:25:38.315925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.315933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.316256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.316263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.316574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.316581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.316892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.316899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.317281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.317288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 
00:31:30.080 [2024-11-19 11:25:38.317596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.317603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.317920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.317928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.318218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.318226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.318424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.318431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.318707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.318714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 
00:31:30.080 [2024-11-19 11:25:38.319054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.319060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.319351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.319359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.319688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.319695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.080 qpair failed and we were unable to recover it. 00:31:30.080 [2024-11-19 11:25:38.319984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.080 [2024-11-19 11:25:38.319991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.320314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.320321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 
00:31:30.081 [2024-11-19 11:25:38.320657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.320664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.321007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.321014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.321332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.321341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.321661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.321669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.321966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.321973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 
00:31:30.081 [2024-11-19 11:25:38.322289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.322296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.322469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.322477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.322796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.322803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.323122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.323129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.323424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.323431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 
00:31:30.081 [2024-11-19 11:25:38.323621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.323628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.323960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.323967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.324292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.324298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.324453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.324460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.324728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.324736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 
00:31:30.081 [2024-11-19 11:25:38.325147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.325153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.325522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.325528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.325854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.325868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.326148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.326155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.326471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.326478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 
00:31:30.081 [2024-11-19 11:25:38.326797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.326804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.326845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.326852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.327127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.327134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.327345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.327353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 00:31:30.081 [2024-11-19 11:25:38.327665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.081 [2024-11-19 11:25:38.327671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.081 qpair failed and we were unable to recover it. 
00:31:30.081 [2024-11-19 11:25:38.327973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.081 [2024-11-19 11:25:38.327981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.081 qpair failed and we were unable to recover it.
00:31:30.084 [2024-11-19 11:25:38.360626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.084 [2024-11-19 11:25:38.360633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.084 qpair failed and we were unable to recover it. 00:31:30.084 [2024-11-19 11:25:38.360949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.084 [2024-11-19 11:25:38.360957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.084 qpair failed and we were unable to recover it. 00:31:30.084 [2024-11-19 11:25:38.361288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.084 [2024-11-19 11:25:38.361295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.084 qpair failed and we were unable to recover it. 00:31:30.084 [2024-11-19 11:25:38.361602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.084 [2024-11-19 11:25:38.361610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.084 qpair failed and we were unable to recover it. 00:31:30.084 [2024-11-19 11:25:38.361777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.361784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 
00:31:30.085 [2024-11-19 11:25:38.362072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.362079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.362404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.362410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.362610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.362617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.362905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.362913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.363235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.363242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 
00:31:30.085 [2024-11-19 11:25:38.363558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.363565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.363749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.363757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.363925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.363932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.364225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.364232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.364423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.364429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 
00:31:30.085 [2024-11-19 11:25:38.364754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.364761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.365104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.365112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.365403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.365410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.365726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.365733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.366028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.366036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 
00:31:30.085 [2024-11-19 11:25:38.366251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.366258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.366576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.366583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.366889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.366896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.367084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.367090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.367420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.367427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 
00:31:30.085 [2024-11-19 11:25:38.367738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.367745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.368029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.368038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.368352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.368359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.368637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.368644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.368841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.368848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 
00:31:30.085 [2024-11-19 11:25:38.369097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.369104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.369335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.369342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.369664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.369672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.369959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.369967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.370128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.370135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 
00:31:30.085 [2024-11-19 11:25:38.370417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.370424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.370703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.370710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.371033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.371040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.371356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.371363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.371663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.371671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 
00:31:30.085 [2024-11-19 11:25:38.371980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.371990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.372278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.085 [2024-11-19 11:25:38.372285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.085 qpair failed and we were unable to recover it. 00:31:30.085 [2024-11-19 11:25:38.372616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.372623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.372958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.372965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.373189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.373195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 
00:31:30.086 [2024-11-19 11:25:38.373387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.373394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.373734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.373741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.374093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.374100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.374288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.374295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.374591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.374598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 
00:31:30.086 [2024-11-19 11:25:38.374775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.374782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.374959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.374966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.375268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.375275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.375488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.375495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.375780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.375786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 
00:31:30.086 [2024-11-19 11:25:38.375974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.375981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.376149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.376156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.376480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.376487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.376787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.376794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.377106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.377113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 
00:31:30.086 [2024-11-19 11:25:38.377449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.377456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.377782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.377789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.378166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.378173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.378373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.378380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.378638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.378645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 
00:31:30.086 [2024-11-19 11:25:38.378813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.378820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.379115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.379125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.379494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.379500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.379663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.379671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.379981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.379988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 
00:31:30.086 [2024-11-19 11:25:38.380263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.380270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.380584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.380590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.380756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.380764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.381044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.381051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.381352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.381359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 
00:31:30.086 [2024-11-19 11:25:38.381636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.381644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.381892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.381899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.382188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.382195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.382475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.086 [2024-11-19 11:25:38.382482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.086 qpair failed and we were unable to recover it. 00:31:30.086 [2024-11-19 11:25:38.382619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.087 [2024-11-19 11:25:38.382626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.087 qpair failed and we were unable to recover it. 
00:31:30.370 [2024-11-19 11:25:38.415151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.415158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.415465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.415472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.415790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.415797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.416155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.416162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.416346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.416353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 
00:31:30.370 [2024-11-19 11:25:38.416639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.416646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.416943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.416950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.417264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.417271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.417511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.417517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.417810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.417817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 
00:31:30.370 [2024-11-19 11:25:38.418142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.418150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.418448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.418455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.418766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.418773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.419083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.419091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.419392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.419399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 
00:31:30.370 [2024-11-19 11:25:38.419679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.419686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.420001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.420008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.420253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.420260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.420568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.420575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.420884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.420891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 
00:31:30.370 [2024-11-19 11:25:38.421195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.421202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.421392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.421399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.421758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.421765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.422123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.422132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.422468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.422476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 
00:31:30.370 [2024-11-19 11:25:38.422772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.422778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.423058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.423066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.423372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.423378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.423689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.423696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.424023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.424031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 
00:31:30.370 [2024-11-19 11:25:38.424342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.424349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.424651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.424658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.424936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.424943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.425243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.425250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 00:31:30.370 [2024-11-19 11:25:38.425556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.370 [2024-11-19 11:25:38.425563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.370 qpair failed and we were unable to recover it. 
00:31:30.371 [2024-11-19 11:25:38.425840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.425846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.426149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.426157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.426517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.426525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.426815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.426822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.427137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.427145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 
00:31:30.371 [2024-11-19 11:25:38.427458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.427465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.427764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.427770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.428085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.428092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.428403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.428409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.428721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.428728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 
00:31:30.371 [2024-11-19 11:25:38.428918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.428926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.429119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.429125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.429291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.429298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.429383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.429390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.429672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.429679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 
00:31:30.371 [2024-11-19 11:25:38.430069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.430076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.430350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.430356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.430677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.430684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.430994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.431001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.431305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.431312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 
00:31:30.371 [2024-11-19 11:25:38.431621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.431627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.431932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.431939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.432273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.432279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.432566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.432574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.432879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.432886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 
00:31:30.371 [2024-11-19 11:25:38.433273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.433280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.433584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.433590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.433917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.433925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.434237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.434245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.434550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.434556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 
00:31:30.371 [2024-11-19 11:25:38.434768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.434774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.435126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.435133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.435446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.435453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.435658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.435665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.435942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.435949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 
00:31:30.371 [2024-11-19 11:25:38.436250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.436258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.436536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.371 [2024-11-19 11:25:38.436542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.371 qpair failed and we were unable to recover it. 00:31:30.371 [2024-11-19 11:25:38.436877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.372 [2024-11-19 11:25:38.436884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.372 qpair failed and we were unable to recover it. 00:31:30.372 [2024-11-19 11:25:38.437185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.372 [2024-11-19 11:25:38.437192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.372 qpair failed and we were unable to recover it. 00:31:30.372 [2024-11-19 11:25:38.437512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.372 [2024-11-19 11:25:38.437518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.372 qpair failed and we were unable to recover it. 
00:31:30.372 [2024-11-19 11:25:38.437879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.372 [2024-11-19 11:25:38.437886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.372 qpair failed and we were unable to recover it. 00:31:30.372 [2024-11-19 11:25:38.438185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.372 [2024-11-19 11:25:38.438192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.372 qpair failed and we were unable to recover it. 00:31:30.372 [2024-11-19 11:25:38.438511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.372 [2024-11-19 11:25:38.438518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.372 qpair failed and we were unable to recover it. 00:31:30.372 [2024-11-19 11:25:38.438910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.372 [2024-11-19 11:25:38.438917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.372 qpair failed and we were unable to recover it. 00:31:30.372 [2024-11-19 11:25:38.439286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.372 [2024-11-19 11:25:38.439293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.372 qpair failed and we were unable to recover it. 
00:31:30.375 [2024-11-19 11:25:38.471410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.471418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.471793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.471800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.472172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.472179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.472464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.472472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.472782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.472789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 
00:31:30.375 [2024-11-19 11:25:38.472985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.472992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.473254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.473266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.473566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.473573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.473858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.473868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.474071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.474079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 
00:31:30.375 [2024-11-19 11:25:38.474292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.474299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.474606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.474613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.474919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.474926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.475251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.475258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.475582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.475588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 
00:31:30.375 [2024-11-19 11:25:38.475783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.475790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.476144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.476151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.476454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.476461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.476775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.476781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.477095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.477102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 
00:31:30.375 [2024-11-19 11:25:38.477426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.477432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.477732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.477739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.477937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.477944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.478276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.478282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.478593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.478599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 
00:31:30.375 [2024-11-19 11:25:38.478885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.478893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.375 [2024-11-19 11:25:38.479204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.375 [2024-11-19 11:25:38.479210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.375 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.479517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.479524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.479837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.479843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.480015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.480023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 
00:31:30.376 [2024-11-19 11:25:38.480310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.480316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.480643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.480650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.481026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.481033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.481340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.481349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.481664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.481671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 
00:31:30.376 [2024-11-19 11:25:38.481971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.481978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.482304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.482310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.482472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.482479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.482778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.482784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.483061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.483068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 
00:31:30.376 [2024-11-19 11:25:38.483390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.483397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.483680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.483688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.483992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.483999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.484266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.484272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.484654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.484662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 
00:31:30.376 [2024-11-19 11:25:38.484838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.484845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.485137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.485144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.485450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.485456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.485740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.485747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.486031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.486039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 
00:31:30.376 [2024-11-19 11:25:38.486230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.486237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.486521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.486528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.486835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.486842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.487150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.487158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.487473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.487480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 
00:31:30.376 [2024-11-19 11:25:38.487765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.487772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.487893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.487901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.488210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.488216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.488500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.488507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.488806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.488813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 
00:31:30.376 [2024-11-19 11:25:38.489127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.489134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.376 qpair failed and we were unable to recover it. 00:31:30.376 [2024-11-19 11:25:38.489401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.376 [2024-11-19 11:25:38.489408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.489714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.489720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.489931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.489938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.490285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.490293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 
00:31:30.377 [2024-11-19 11:25:38.490602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.490609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.490928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.490936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.491220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.491226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.491518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.491525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.491842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.491850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 
00:31:30.377 [2024-11-19 11:25:38.492056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.492063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.492371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.492377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.492688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.492695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.493005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.493014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.493395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.493402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 
00:31:30.377 [2024-11-19 11:25:38.493695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.493703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.493881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.493888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.494220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.494233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.494405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.494412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.494679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.494685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 
00:31:30.377 [2024-11-19 11:25:38.494901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.494908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.495098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.495105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.495420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.495426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.495705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.495711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.495881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.495889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 
00:31:30.377 [2024-11-19 11:25:38.496187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.496194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.496489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.496496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.496794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.496801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.497093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.497101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.497393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.497399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 
00:31:30.377 [2024-11-19 11:25:38.497712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.497719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.497996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.498003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.498304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.498319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.498628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.498634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.498844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.498850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 
00:31:30.377 [2024-11-19 11:25:38.499193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.499200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.499551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.499558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.377 [2024-11-19 11:25:38.499888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.377 [2024-11-19 11:25:38.499894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.377 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.500255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.500262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.500544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.500550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 
00:31:30.378 [2024-11-19 11:25:38.500858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.500867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.501166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.501174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.501302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.501309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.501617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.501623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.501927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.501935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 
00:31:30.378 [2024-11-19 11:25:38.502113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.502120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.502325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.502331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.502540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.502547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.502869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.502877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.503154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.503161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 
00:31:30.378 [2024-11-19 11:25:38.503321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.503328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.503678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.503685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.503999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.504006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.504339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.504348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.504532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.504539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 
00:31:30.378 [2024-11-19 11:25:38.504847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.504853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.505047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.505054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.505425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.505431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.505738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.505744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.506062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.506069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 
00:31:30.378 [2024-11-19 11:25:38.506352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.506359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.506691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.506698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.506990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.506998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.507306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.507313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.507624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.507630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 
00:31:30.378 [2024-11-19 11:25:38.507807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.507814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.508095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.508102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.508420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.508427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.508748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.508755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.509034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.509041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 
00:31:30.378 [2024-11-19 11:25:38.509364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.509370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.509658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.509665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.509976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.509983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.510195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.510202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 00:31:30.378 [2024-11-19 11:25:38.510533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.510540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.378 qpair failed and we were unable to recover it. 
00:31:30.378 [2024-11-19 11:25:38.510850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.378 [2024-11-19 11:25:38.510857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.511139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.511145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.511445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.511452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.511759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.511765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.511967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.511974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 
00:31:30.379 [2024-11-19 11:25:38.512306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.512313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.512496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.512503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.512714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.512722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.512903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.512911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.513107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.513113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 
00:31:30.379 [2024-11-19 11:25:38.513392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.513399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.513696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.513703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.514025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.514033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.514198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.514205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.514515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.514522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 
00:31:30.379 [2024-11-19 11:25:38.514812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.514818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.515138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.515145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.515457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.515463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.515744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.515754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.516075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.516082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 
00:31:30.379 [2024-11-19 11:25:38.516367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.516375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.516658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.516665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.516959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.516966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.517271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.517277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.517602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.517609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 
00:31:30.379 [2024-11-19 11:25:38.517819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.517825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.518130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.518137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.518446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.518452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.518680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.518687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.518991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.518998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 
00:31:30.379 [2024-11-19 11:25:38.519322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.519328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.519637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.519643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.519935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.519942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.520258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.520265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 00:31:30.379 [2024-11-19 11:25:38.520460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.379 [2024-11-19 11:25:38.520467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.379 qpair failed and we were unable to recover it. 
00:31:30.379 [2024-11-19 11:25:38.520786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.379 [2024-11-19 11:25:38.520793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.379 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054:posix_sock_create connect() errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 11:25:38.520786 through 11:25:38.554061, with only the timestamps changing ...]
00:31:30.383 [2024-11-19 11:25:38.554054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.383 [2024-11-19 11:25:38.554061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.383 qpair failed and we were unable to recover it.
00:31:30.383 [2024-11-19 11:25:38.554329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.554336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.554673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.554679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.554880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.554887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.555086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.555093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.555381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.555389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 
00:31:30.383 [2024-11-19 11:25:38.555683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.555691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.555958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.555965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.556272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.556279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.556583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.556590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.556905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.556913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 
00:31:30.383 [2024-11-19 11:25:38.557231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.557238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.557555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.557561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.557734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.557741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.557966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.557973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.558269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.558276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 
00:31:30.383 [2024-11-19 11:25:38.558579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.558588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.558912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.558919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.559235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.559242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.559518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.559526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.559713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.559721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 
00:31:30.383 [2024-11-19 11:25:38.560004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.560012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.560205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.560212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.560409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.560416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.560618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.560625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.560934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.560941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 
00:31:30.383 [2024-11-19 11:25:38.561125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.561131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.561428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.561435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.561606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.561613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.561824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.561831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.562179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.562186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 
00:31:30.383 [2024-11-19 11:25:38.562493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.562500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.562664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.383 [2024-11-19 11:25:38.562672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.383 qpair failed and we were unable to recover it. 00:31:30.383 [2024-11-19 11:25:38.563006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.563013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.563253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.563260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.563534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.563540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 
00:31:30.384 [2024-11-19 11:25:38.563838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.563845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.564177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.564184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.564499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.564506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.564748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.564755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.565108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.565114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 
00:31:30.384 [2024-11-19 11:25:38.565422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.565429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.565634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.565641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.565821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.565828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.566139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.566146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.566426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.566433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 
00:31:30.384 [2024-11-19 11:25:38.566744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.566751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.567054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.567062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.567377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.567384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.567688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.567695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.568029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.568036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 
00:31:30.384 [2024-11-19 11:25:38.568403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.568410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.568725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.568732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.569036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.569043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.569269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.569276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.569543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.569549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 
00:31:30.384 [2024-11-19 11:25:38.569757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.569765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.570114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.570121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.570453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.570460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.570805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.570812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.571080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.571087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 
00:31:30.384 [2024-11-19 11:25:38.571424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.571432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.571744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.571752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.572084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.572092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.572217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.572225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.572386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.572393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 
00:31:30.384 [2024-11-19 11:25:38.572661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.572669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.572977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.572984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.573326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.573338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.573499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.384 [2024-11-19 11:25:38.573506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.384 qpair failed and we were unable to recover it. 00:31:30.384 [2024-11-19 11:25:38.573662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.573669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 
00:31:30.385 [2024-11-19 11:25:38.573857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.573867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.574180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.574187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.574366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.574372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.574667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.574675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.574977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.574984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 
00:31:30.385 [2024-11-19 11:25:38.575156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.575164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.575482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.575489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.575805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.575813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.576027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.576034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.576337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.576344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 
00:31:30.385 [2024-11-19 11:25:38.576637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.576644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.576818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.576825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.577108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.577116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.577322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.577330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.577640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.577648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 
00:31:30.385 [2024-11-19 11:25:38.577962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.577969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.578173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.578180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.578377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.578384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.578719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.578725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.578946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.578953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 
00:31:30.385 [2024-11-19 11:25:38.579275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.579282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.579621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.579628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.580035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.580043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.580323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.580330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.580493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.580500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 
00:31:30.385 [2024-11-19 11:25:38.580838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.580846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.581170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.581178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.581486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.581492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.581826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.581833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.582014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.582022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 
00:31:30.385 [2024-11-19 11:25:38.582320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.582327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.582637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.582645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.583002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.583009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.583348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.583355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 00:31:30.385 [2024-11-19 11:25:38.583511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.385 [2024-11-19 11:25:38.583518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.385 qpair failed and we were unable to recover it. 
00:31:30.386 [2024-11-19 11:25:38.583818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.583825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.584150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.584157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.584468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.584475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.584673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.584679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.585126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.585134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 
00:31:30.386 [2024-11-19 11:25:38.585437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.585444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.585744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.585752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.586029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.586036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.586326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.586333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.586627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.586635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 
00:31:30.386 [2024-11-19 11:25:38.586940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.586947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.587216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.587223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.587548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.587554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.587846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.587852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.588162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.588169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 
00:31:30.386 [2024-11-19 11:25:38.588236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.588242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.588465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.588472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.588734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.588741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.588949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.588956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.589259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.589266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 
00:31:30.386 [2024-11-19 11:25:38.589453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.589460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.589740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.589746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.590050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.590058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.590218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.590226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.590445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.590452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 
00:31:30.386 [2024-11-19 11:25:38.590630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.590637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.590940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.590948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.591293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.386 [2024-11-19 11:25:38.591301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.386 qpair failed and we were unable to recover it. 00:31:30.386 [2024-11-19 11:25:38.591614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.591620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.591909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.591916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 
00:31:30.387 [2024-11-19 11:25:38.592271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.592280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.592404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.592411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.592705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.592712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.593000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.593007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.593168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.593175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 
00:31:30.387 [2024-11-19 11:25:38.593510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.593517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.593807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.593814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.594138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.594145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.594474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.594481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.594808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.594815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 
00:31:30.387 [2024-11-19 11:25:38.595130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.595138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.595463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.595470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.595761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.595768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.595934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.595942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.596238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.596246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 
00:31:30.387 [2024-11-19 11:25:38.596423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.596430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.596693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.596700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.596978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.596985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.597206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.597214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.597500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.597506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 
00:31:30.387 [2024-11-19 11:25:38.597802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.597810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.598122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.598129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.598420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.598433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.598736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.598743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.599036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.599044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 
00:31:30.387 [2024-11-19 11:25:38.599381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.599387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.599713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.599721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.600038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.600045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.600341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.600348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.600530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.600536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 
00:31:30.387 [2024-11-19 11:25:38.600752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.600759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.601097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.601103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.601423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.601430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.601754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.601761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.387 [2024-11-19 11:25:38.602047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.602054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 
00:31:30.387 [2024-11-19 11:25:38.602370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.387 [2024-11-19 11:25:38.602376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.387 qpair failed and we were unable to recover it. 00:31:30.388 [2024-11-19 11:25:38.602688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.388 [2024-11-19 11:25:38.602695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.388 qpair failed and we were unable to recover it. 00:31:30.388 [2024-11-19 11:25:38.603003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.388 [2024-11-19 11:25:38.603010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.388 qpair failed and we were unable to recover it. 00:31:30.388 [2024-11-19 11:25:38.603217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.388 [2024-11-19 11:25:38.603224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.388 qpair failed and we were unable to recover it. 00:31:30.388 [2024-11-19 11:25:38.603522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.388 [2024-11-19 11:25:38.603529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.388 qpair failed and we were unable to recover it. 
00:31:30.391 [2024-11-19 11:25:38.636689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.636696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.636998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.637005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.637333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.637340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.637632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.637640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.637950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.637957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 
00:31:30.391 [2024-11-19 11:25:38.638243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.638251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.638594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.638601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.638885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.638892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.639061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.639068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.639320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.639326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 
00:31:30.391 [2024-11-19 11:25:38.639591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.639600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.639920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.639927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.640246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.640253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.640565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.640572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.640872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.640879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 
00:31:30.391 [2024-11-19 11:25:38.641204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.641211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.641601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.641608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.641807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.641814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.642089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.642098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.642407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.642413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 
00:31:30.391 [2024-11-19 11:25:38.642738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.642745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.642936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.642943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.643253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.643259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.643599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.643605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.643910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.643917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 
00:31:30.391 [2024-11-19 11:25:38.644227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.644233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.644532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.644539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.644833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.644840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.645166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.645172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.645480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.645486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 
00:31:30.391 [2024-11-19 11:25:38.645826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.645833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.391 qpair failed and we were unable to recover it. 00:31:30.391 [2024-11-19 11:25:38.646123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.391 [2024-11-19 11:25:38.646131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.646423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.646430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.646736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.646749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.647034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.647042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 
00:31:30.392 [2024-11-19 11:25:38.647243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.647250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.647447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.647455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.647600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.647608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.647913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.647920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.648203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.648210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 
00:31:30.392 [2024-11-19 11:25:38.648530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.648536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.648833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.648840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.649139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.649146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.649456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.649463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.649765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.649771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 
00:31:30.392 [2024-11-19 11:25:38.650081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.650088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.650401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.650408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.650693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.650700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.651030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.651037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.651351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.651358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 
00:31:30.392 [2024-11-19 11:25:38.651684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.651692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.651898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.651905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.652245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.652252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.652459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.652465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.652749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.652756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 
00:31:30.392 [2024-11-19 11:25:38.652930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.652938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.653149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.653156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.653462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.653469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.653753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.653761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.654057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.654064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 
00:31:30.392 [2024-11-19 11:25:38.654374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.654381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.654669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.654676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.654968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.654975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.655279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.655285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.655474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.655480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 
00:31:30.392 [2024-11-19 11:25:38.655763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.655770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.656121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.656128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.656439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.656446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.392 [2024-11-19 11:25:38.656774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.392 [2024-11-19 11:25:38.656782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.392 qpair failed and we were unable to recover it. 00:31:30.393 [2024-11-19 11:25:38.657093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.393 [2024-11-19 11:25:38.657100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.393 qpair failed and we were unable to recover it. 
00:31:30.393 [2024-11-19 11:25:38.657409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.393 [2024-11-19 11:25:38.657415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.393 qpair failed and we were unable to recover it. 00:31:30.393 [2024-11-19 11:25:38.657603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.393 [2024-11-19 11:25:38.657610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.393 qpair failed and we were unable to recover it. 00:31:30.393 [2024-11-19 11:25:38.657869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.393 [2024-11-19 11:25:38.657876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.393 qpair failed and we were unable to recover it. 00:31:30.393 [2024-11-19 11:25:38.658188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.393 [2024-11-19 11:25:38.658194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.393 qpair failed and we were unable to recover it. 00:31:30.393 [2024-11-19 11:25:38.658498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.393 [2024-11-19 11:25:38.658505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.393 qpair failed and we were unable to recover it. 
00:31:30.393 [2024-11-19 11:25:38.658837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.393 [2024-11-19 11:25:38.658844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.393 qpair failed and we were unable to recover it.
00:31:30.396 [2024-11-19 11:25:38.691740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.691748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.692039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.692046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.692230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.692236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.692560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.692566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.692867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.692874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 
00:31:30.396 [2024-11-19 11:25:38.693157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.693164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.693474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.693481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.693662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.693671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.693906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.693914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.694135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.694142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 
00:31:30.396 [2024-11-19 11:25:38.694454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.694462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.694776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.694783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.695078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.695086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.695264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.695271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.695540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.695547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 
00:31:30.396 [2024-11-19 11:25:38.695727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.695735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.695930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.695938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.696250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.696256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.696447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.696454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.396 qpair failed and we were unable to recover it. 00:31:30.396 [2024-11-19 11:25:38.696682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.396 [2024-11-19 11:25:38.696689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 
00:31:30.397 [2024-11-19 11:25:38.697020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.697027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.697205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.697212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.697506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.697512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.697809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.697816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.698120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.698127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 
00:31:30.397 [2024-11-19 11:25:38.698420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.698428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.698745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.698751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.699080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.699088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.699390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.699397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.699583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.699590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 
00:31:30.397 [2024-11-19 11:25:38.699798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.699805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.699998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.700005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.700302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.700309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.700529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.700537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 00:31:30.397 [2024-11-19 11:25:38.700850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.397 [2024-11-19 11:25:38.700857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.397 qpair failed and we were unable to recover it. 
00:31:30.397 [2024-11-19 11:25:38.701185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.701192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.701484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.701492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.701878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.701885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.702192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.702199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.702531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.702538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 
00:31:30.676 [2024-11-19 11:25:38.702885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.702892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.703253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.703260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.703426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.703434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.703613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.703620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.703800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.703807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 
00:31:30.676 [2024-11-19 11:25:38.704132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.704140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.704333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.704341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.704662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.704672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.704982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.704989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.705268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.705275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 
00:31:30.676 [2024-11-19 11:25:38.705469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.705475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.705767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.705774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.706088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.706096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.706318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.706325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.706499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.706506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 
00:31:30.676 [2024-11-19 11:25:38.706773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.706780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.707058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.707065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.707370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.707378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.707537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.707545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.707750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.707756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 
00:31:30.676 [2024-11-19 11:25:38.708047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.708054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.708253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.708260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.708551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.708558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.708873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.708881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 00:31:30.676 [2024-11-19 11:25:38.709233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.676 [2024-11-19 11:25:38.709241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.676 qpair failed and we were unable to recover it. 
00:31:30.677 [2024-11-19 11:25:38.709587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.709595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 00:31:30.677 [2024-11-19 11:25:38.709905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.709912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 00:31:30.677 [2024-11-19 11:25:38.710241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.710248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 00:31:30.677 [2024-11-19 11:25:38.710441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.710447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 00:31:30.677 [2024-11-19 11:25:38.710645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.710652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 
00:31:30.677 [2024-11-19 11:25:38.710957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.710966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 00:31:30.677 [2024-11-19 11:25:38.711206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.711213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 00:31:30.677 [2024-11-19 11:25:38.711545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.711552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 00:31:30.677 [2024-11-19 11:25:38.711852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.711869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 00:31:30.677 [2024-11-19 11:25:38.712067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.712074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 
00:31:30.677 [2024-11-19 11:25:38.712410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.677 [2024-11-19 11:25:38.712417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.677 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111, ECONNREFUSED) / qpair recovery errors repeat continuously from 11:25:38.712750 through 11:25:38.746255 (console timestamps 00:31:30.677-00:31:30.680), all for tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 ...]
00:31:30.680 [2024-11-19 11:25:38.746529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.746536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.746705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.746714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.747036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.747044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.747392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.747400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.747702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.747711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 
00:31:30.680 [2024-11-19 11:25:38.748078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.748087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.748394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.748401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.748704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.748711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.749030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.749037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.749357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.749365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 
00:31:30.680 [2024-11-19 11:25:38.749682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.749690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.750001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.750010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.750200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.750207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.750520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.750526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.750811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.750818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 
00:31:30.680 [2024-11-19 11:25:38.751139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.751147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.751458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.751465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.751777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.751786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.752062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.752070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 00:31:30.680 [2024-11-19 11:25:38.752433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.680 [2024-11-19 11:25:38.752440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.680 qpair failed and we were unable to recover it. 
00:31:30.681 [2024-11-19 11:25:38.752748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.752755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.753037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.753044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.753408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.753414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.753598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.753605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.753869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.753876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 
00:31:30.681 [2024-11-19 11:25:38.754188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.754194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.754404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.754412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.754748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.754755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.755045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.755052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.755377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.755384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 
00:31:30.681 [2024-11-19 11:25:38.755679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.755686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.756018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.756025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.756333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.756341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.756657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.756665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.756961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.756968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 
00:31:30.681 [2024-11-19 11:25:38.757255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.757263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.757571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.757579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.757870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.757877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.758186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.758192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.758388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.758395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 
00:31:30.681 [2024-11-19 11:25:38.758554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.758562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.758868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.758876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.759151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.759157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.759461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.759468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.759786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.759793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 
00:31:30.681 [2024-11-19 11:25:38.760184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.760191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.760420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.760426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.760729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.760735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.761036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.761042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.761343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.761350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 
00:31:30.681 [2024-11-19 11:25:38.761656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.761663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.761979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.761986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.762285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.762300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.762603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.762610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.762918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.762925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 
00:31:30.681 [2024-11-19 11:25:38.763251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.763258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.763439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.681 [2024-11-19 11:25:38.763447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.681 qpair failed and we were unable to recover it. 00:31:30.681 [2024-11-19 11:25:38.763751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.763761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.764053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.764061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.764372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.764379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 
00:31:30.682 [2024-11-19 11:25:38.764687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.764694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.764895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.764902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.765220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.765227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.765539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.765546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.765839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.765845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 
00:31:30.682 [2024-11-19 11:25:38.766156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.766163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.766470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.766477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.766807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.766814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.767126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.767132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.767440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.767447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 
00:31:30.682 [2024-11-19 11:25:38.767757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.767765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.767965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.767972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.768282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.768290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.768617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.768624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.768933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.768940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 
00:31:30.682 [2024-11-19 11:25:38.769268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.769275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.769612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.769618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.769909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.769916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.770226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.770233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 00:31:30.682 [2024-11-19 11:25:38.770518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.682 [2024-11-19 11:25:38.770525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.682 qpair failed and we were unable to recover it. 
00:31:30.682 [... the same three-line failure sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim at timestamps 2024-11-19 11:25:38.770714 through 11:25:38.801170 ...]
00:31:30.685 [2024-11-19 11:25:38.801456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.801463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.801794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.801800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.802154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.802161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.802475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.802482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.802784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.802791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 
00:31:30.685 [2024-11-19 11:25:38.803110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.803118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.803422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.803429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.803722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.803729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.804031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.804037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.804340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.804348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 
00:31:30.685 [2024-11-19 11:25:38.804418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.804426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.804614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.804621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.685 qpair failed and we were unable to recover it. 00:31:30.685 [2024-11-19 11:25:38.804984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.685 [2024-11-19 11:25:38.804992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.805290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.805297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.805509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.805517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 
00:31:30.686 [2024-11-19 11:25:38.805804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.805812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.806006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.806013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.806193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.806199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.806462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.806469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.806755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.806763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 
00:31:30.686 [2024-11-19 11:25:38.807053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.807060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.807235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.807242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.807352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.807359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.807651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.807658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.807982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.807989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 
00:31:30.686 [2024-11-19 11:25:38.808203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.808210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.808493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.808501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.808803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.808812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.809001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.809008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.809360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.809366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 
00:31:30.686 [2024-11-19 11:25:38.809673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.809680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.809845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.809853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.810168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.810176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.810374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.810381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.810688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.810694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 
00:31:30.686 [2024-11-19 11:25:38.810914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.810921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.811205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.811212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.811529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.811536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.811874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.811881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.812191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.812198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 
00:31:30.686 [2024-11-19 11:25:38.812406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.812412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.812722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.812729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.813036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.813043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.813265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.813272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.813610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.813616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 
00:31:30.686 [2024-11-19 11:25:38.813922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.813930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.814101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.814108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.814259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.814266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.814616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.814623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 00:31:30.686 [2024-11-19 11:25:38.814933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.686 [2024-11-19 11:25:38.814941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.686 qpair failed and we were unable to recover it. 
00:31:30.687 [2024-11-19 11:25:38.815121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.815129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.815448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.815454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.815763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.815770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.816076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.816084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.816332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.816339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 
00:31:30.687 [2024-11-19 11:25:38.816524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.816536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.816622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.816629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.816909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.816916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.817174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.817180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.817519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.817525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 
00:31:30.687 [2024-11-19 11:25:38.817804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.817810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.818095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.818102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.818409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.818421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.818728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.818735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.819030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.819045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 
00:31:30.687 [2024-11-19 11:25:38.819357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.819364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.819670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.819678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.819985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.819993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.820168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.820175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.820482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.820490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 
00:31:30.687 [2024-11-19 11:25:38.820816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.820824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.821017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.821025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.821310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.821316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.821485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.821493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.821779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.821786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 
00:31:30.687 [2024-11-19 11:25:38.821963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.821971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.822261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.822269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.822559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.822566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.822870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.822878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 00:31:30.687 [2024-11-19 11:25:38.823191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.687 [2024-11-19 11:25:38.823198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.687 qpair failed and we were unable to recover it. 
00:31:30.690 [2024-11-19 11:25:38.856014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.690 [2024-11-19 11:25:38.856021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.690 qpair failed and we were unable to recover it. 00:31:30.690 [2024-11-19 11:25:38.856359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.690 [2024-11-19 11:25:38.856365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.690 qpair failed and we were unable to recover it. 00:31:30.690 [2024-11-19 11:25:38.856679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.690 [2024-11-19 11:25:38.856686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.690 qpair failed and we were unable to recover it. 00:31:30.690 [2024-11-19 11:25:38.856993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.690 [2024-11-19 11:25:38.857000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.690 qpair failed and we were unable to recover it. 00:31:30.690 [2024-11-19 11:25:38.857291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.690 [2024-11-19 11:25:38.857299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.690 qpair failed and we were unable to recover it. 
00:31:30.690 [2024-11-19 11:25:38.857637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.690 [2024-11-19 11:25:38.857643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.690 qpair failed and we were unable to recover it. 00:31:30.690 [2024-11-19 11:25:38.857921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.857929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.858259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.858265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.858550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.858557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.858869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.858876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 
00:31:30.691 [2024-11-19 11:25:38.859068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.859075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.859451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.859458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.859765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.859772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.860069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.860076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.860364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.860371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 
00:31:30.691 [2024-11-19 11:25:38.860677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.860684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.860970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.860978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.861272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.861278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.861592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.861598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.861932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.861938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 
00:31:30.691 [2024-11-19 11:25:38.862247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.862254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.862456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.862463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.862650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.862656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.862943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.862950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.863285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.863293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 
00:31:30.691 [2024-11-19 11:25:38.863583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.863590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.863900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.863907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.864233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.864239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.864532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.864539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.864866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.864873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 
00:31:30.691 [2024-11-19 11:25:38.865157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.865164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.865399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.865406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.865737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.865743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.866038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.866045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.866354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.866360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 
00:31:30.691 [2024-11-19 11:25:38.866644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.866652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.866966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.866973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.867265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.867272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.691 qpair failed and we were unable to recover it. 00:31:30.691 [2024-11-19 11:25:38.867491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.691 [2024-11-19 11:25:38.867497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.867817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.867823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 
00:31:30.692 [2024-11-19 11:25:38.868021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.868028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.868336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.868342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.868659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.868666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.868976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.868983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.869299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.869305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 
00:31:30.692 [2024-11-19 11:25:38.869621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.869628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.869789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.869797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.870209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.870216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.870517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.870524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.870827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.870834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 
00:31:30.692 [2024-11-19 11:25:38.871145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.871153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.871459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.871466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.871629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.871637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.872005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.872012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.872315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.872322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 
00:31:30.692 [2024-11-19 11:25:38.872623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.872631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.872790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.872798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.873233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.873240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.873527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.873534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.873848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.873855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 
00:31:30.692 [2024-11-19 11:25:38.874175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.874183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.874475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.874482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.874790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.874797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.875112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.875120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.875449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.875458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 
00:31:30.692 [2024-11-19 11:25:38.875771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.875778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.876069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.876076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.876365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.876373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.876681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.876689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.876995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.877003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 
00:31:30.692 [2024-11-19 11:25:38.877245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.877252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.877558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.877565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.877760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.877767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.878011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.878019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 00:31:30.692 [2024-11-19 11:25:38.878196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.692 [2024-11-19 11:25:38.878203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.692 qpair failed and we were unable to recover it. 
00:31:30.692 [2024-11-19 11:25:38.878519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.692 [2024-11-19 11:25:38.878526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.692 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 11:25:38.878695 through 11:25:38.912344 ...]
00:31:30.696 [2024-11-19 11:25:38.912657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.912664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.912857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.912867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.913050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.913058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.913270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.913277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.913539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.913546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 
00:31:30.696 [2024-11-19 11:25:38.913842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.913849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.914150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.914157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.914483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.914489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.914854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.914861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.915161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.915168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 
00:31:30.696 [2024-11-19 11:25:38.915475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.915482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.915773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.915780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.916101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.916108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.916417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.916424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.916641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.916648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 
00:31:30.696 [2024-11-19 11:25:38.916963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.916970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.917291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.917297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.917492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.917499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.917790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.917797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.918065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.918072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 
00:31:30.696 [2024-11-19 11:25:38.918242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.918250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.918434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.918441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.918590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.918598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.918941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.918948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.919251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.919258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 
00:31:30.696 [2024-11-19 11:25:38.919588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.919595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.919908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.919915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.920241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.920248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.920559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.920566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.920761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.920767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 
00:31:30.696 [2024-11-19 11:25:38.921067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.696 [2024-11-19 11:25:38.921074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.696 qpair failed and we were unable to recover it. 00:31:30.696 [2024-11-19 11:25:38.921452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.921459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.921666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.921673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.921732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.921739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.922080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.922090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 
00:31:30.697 [2024-11-19 11:25:38.922265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.922272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.922583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.922590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.922892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.922899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.923186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.923192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.923476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.923483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 
00:31:30.697 [2024-11-19 11:25:38.923786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.923792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.924088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.924095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.924413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.924420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.924722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.924729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.924926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.924933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 
00:31:30.697 [2024-11-19 11:25:38.925333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.925339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.925649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.925656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.925945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.925952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.926265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.926272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.926456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.926463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 
00:31:30.697 [2024-11-19 11:25:38.926734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.926741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.927026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.927033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.927348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.927354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.927573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.927579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.927767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.927773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 
00:31:30.697 [2024-11-19 11:25:38.928062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.928069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.928273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.928280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.928443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.928451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.928732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.928738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.929030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.929038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 
00:31:30.697 [2024-11-19 11:25:38.929201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.929208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.929517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.929524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.929804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.929811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.930118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.930125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.930334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.930342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 
00:31:30.697 [2024-11-19 11:25:38.930670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.930678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.930996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.931003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.931333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.931340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.697 [2024-11-19 11:25:38.931644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.697 [2024-11-19 11:25:38.931651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.697 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.931828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.931836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 
00:31:30.698 [2024-11-19 11:25:38.932094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.932101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.932402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.932409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.932718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.932724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.932916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.932923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.933218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.933228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 
00:31:30.698 [2024-11-19 11:25:38.933546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.933553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.933872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.933880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.934174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.934181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.934474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.934481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 00:31:30.698 [2024-11-19 11:25:38.934786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.698 [2024-11-19 11:25:38.934793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.698 qpair failed and we were unable to recover it. 
00:31:30.701 [2024-11-19 11:25:38.967674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.967682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.967884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.967891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.968191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.968197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.968509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.968516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.968822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.968830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 
00:31:30.701 [2024-11-19 11:25:38.969123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.969130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.969437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.969444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.969758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.969765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.970091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.970098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.970424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.970430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 
00:31:30.701 [2024-11-19 11:25:38.970740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.970747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.971053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.971060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.971341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.971349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.971656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.971663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.971960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.971967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 
00:31:30.701 [2024-11-19 11:25:38.972287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.972294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.972456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.972464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.972738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.972746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.973041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.973048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.973263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.973269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 
00:31:30.701 [2024-11-19 11:25:38.973568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.973575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.973976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.973982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.974306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.974312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.974606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.974612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 00:31:30.701 [2024-11-19 11:25:38.974794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.701 [2024-11-19 11:25:38.974801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.701 qpair failed and we were unable to recover it. 
00:31:30.701 [2024-11-19 11:25:38.975133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.975140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.975446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.975453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.975737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.975745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.976046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.976053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.976354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.976361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 
00:31:30.702 [2024-11-19 11:25:38.976566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.976574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.976886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.976893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.977185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.977192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.977508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.977515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.977801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.977818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 
00:31:30.702 [2024-11-19 11:25:38.978005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.978012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.978316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.978323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.978620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.978627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.978939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.978946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.979135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.979142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 
00:31:30.702 [2024-11-19 11:25:38.979458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.979465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.979756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.979763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.980017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.980024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.980353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.980360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.980657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.980665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 
00:31:30.702 [2024-11-19 11:25:38.981055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.981062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.981414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.981421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.981720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.981727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.982038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.982045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.982369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.982375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 
00:31:30.702 [2024-11-19 11:25:38.982531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.982539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.982820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.982827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.983042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.983049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.983356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.983362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.983684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.983691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 
00:31:30.702 [2024-11-19 11:25:38.984008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.984015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.984300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.984307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.984614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.984620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.984938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.984946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.985265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.985272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 
00:31:30.702 [2024-11-19 11:25:38.985584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.985591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.985777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.985784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.702 [2024-11-19 11:25:38.986107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.702 [2024-11-19 11:25:38.986114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.702 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.986411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.986418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.986735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.986742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 
00:31:30.703 [2024-11-19 11:25:38.987034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.987042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.987364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.987372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.987529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.987537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.987722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.987729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.988039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.988046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 
00:31:30.703 [2024-11-19 11:25:38.988368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.988375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.988685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.988694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.988854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.988864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.989182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.989189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 00:31:30.703 [2024-11-19 11:25:38.989388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.703 [2024-11-19 11:25:38.989394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.703 qpair failed and we were unable to recover it. 
00:31:30.703 [2024-11-19 11:25:38.989708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:30.703 [2024-11-19 11:25:38.989714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:30.703 qpair failed and we were unable to recover it.
00:31:30.703 (the three messages above repeat, with new timestamps, for every subsequent connect attempt through [2024-11-19 11:25:39.023661]; each attempt to 10.0.0.2 port 4420 failed with errno = 111, i.e. ECONNREFUSED, and no qpair could be recovered)
00:31:30.988 [2024-11-19 11:25:39.023957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.988 [2024-11-19 11:25:39.023964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.988 qpair failed and we were unable to recover it. 00:31:30.988 [2024-11-19 11:25:39.024264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.988 [2024-11-19 11:25:39.024273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.988 qpair failed and we were unable to recover it. 00:31:30.988 [2024-11-19 11:25:39.024583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.989 [2024-11-19 11:25:39.024589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.989 qpair failed and we were unable to recover it. 00:31:30.989 [2024-11-19 11:25:39.024724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.989 [2024-11-19 11:25:39.024730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.989 qpair failed and we were unable to recover it. 00:31:30.989 [2024-11-19 11:25:39.024992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.989 [2024-11-19 11:25:39.024999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.989 qpair failed and we were unable to recover it. 
00:31:30.989 [2024-11-19 11:25:39.025331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.989 [2024-11-19 11:25:39.025338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.989 qpair failed and we were unable to recover it. 00:31:30.990 [2024-11-19 11:25:39.025554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.990 [2024-11-19 11:25:39.025561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.990 qpair failed and we were unable to recover it. 00:31:30.990 [2024-11-19 11:25:39.025761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.990 [2024-11-19 11:25:39.025768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.990 qpair failed and we were unable to recover it. 00:31:30.990 [2024-11-19 11:25:39.026075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.990 [2024-11-19 11:25:39.026082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.990 qpair failed and we were unable to recover it. 00:31:30.990 [2024-11-19 11:25:39.026354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.990 [2024-11-19 11:25:39.026362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.990 qpair failed and we were unable to recover it. 
00:31:30.990 [2024-11-19 11:25:39.026696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.990 [2024-11-19 11:25:39.026702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.990 qpair failed and we were unable to recover it. 00:31:30.990 [2024-11-19 11:25:39.027020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.990 [2024-11-19 11:25:39.027034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.990 qpair failed and we were unable to recover it. 00:31:30.991 [2024-11-19 11:25:39.027348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.991 [2024-11-19 11:25:39.027355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.991 qpair failed and we were unable to recover it. 00:31:30.991 [2024-11-19 11:25:39.027643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.991 [2024-11-19 11:25:39.027650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.991 qpair failed and we were unable to recover it. 00:31:30.991 [2024-11-19 11:25:39.027976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.991 [2024-11-19 11:25:39.027983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.991 qpair failed and we were unable to recover it. 
00:31:30.991 [2024-11-19 11:25:39.028293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.991 [2024-11-19 11:25:39.028301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.991 qpair failed and we were unable to recover it. 00:31:30.991 [2024-11-19 11:25:39.028612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.991 [2024-11-19 11:25:39.028618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.991 qpair failed and we were unable to recover it. 00:31:30.991 [2024-11-19 11:25:39.028935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.991 [2024-11-19 11:25:39.028942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.991 qpair failed and we were unable to recover it. 00:31:30.992 [2024-11-19 11:25:39.029238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.992 [2024-11-19 11:25:39.029245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.992 qpair failed and we were unable to recover it. 00:31:30.992 [2024-11-19 11:25:39.029557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.992 [2024-11-19 11:25:39.029565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.992 qpair failed and we were unable to recover it. 
00:31:30.992 [2024-11-19 11:25:39.029884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.992 [2024-11-19 11:25:39.029891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.992 qpair failed and we were unable to recover it. 00:31:30.992 [2024-11-19 11:25:39.030201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.992 [2024-11-19 11:25:39.030208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.992 qpair failed and we were unable to recover it. 00:31:30.992 [2024-11-19 11:25:39.030526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.993 [2024-11-19 11:25:39.030533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.993 qpair failed and we were unable to recover it. 00:31:30.993 [2024-11-19 11:25:39.030845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.993 [2024-11-19 11:25:39.030852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.993 qpair failed and we were unable to recover it. 00:31:30.993 [2024-11-19 11:25:39.031182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.993 [2024-11-19 11:25:39.031189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.993 qpair failed and we were unable to recover it. 
00:31:30.993 [2024-11-19 11:25:39.031381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.993 [2024-11-19 11:25:39.031388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.993 qpair failed and we were unable to recover it. 00:31:30.993 [2024-11-19 11:25:39.031682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.993 [2024-11-19 11:25:39.031690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.993 qpair failed and we were unable to recover it. 00:31:30.993 [2024-11-19 11:25:39.031908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.993 [2024-11-19 11:25:39.031916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.993 qpair failed and we were unable to recover it. 00:31:30.993 [2024-11-19 11:25:39.032098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.994 [2024-11-19 11:25:39.032105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.994 qpair failed and we were unable to recover it. 00:31:30.994 [2024-11-19 11:25:39.032277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.994 [2024-11-19 11:25:39.032285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.994 qpair failed and we were unable to recover it. 
00:31:30.994 [2024-11-19 11:25:39.032601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.994 [2024-11-19 11:25:39.032607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.994 qpair failed and we were unable to recover it. 00:31:30.994 [2024-11-19 11:25:39.032916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.994 [2024-11-19 11:25:39.032923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.994 qpair failed and we were unable to recover it. 00:31:30.994 [2024-11-19 11:25:39.033253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.994 [2024-11-19 11:25:39.033260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.994 qpair failed and we were unable to recover it. 00:31:30.994 [2024-11-19 11:25:39.033621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.994 [2024-11-19 11:25:39.033628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.994 qpair failed and we were unable to recover it. 00:31:30.994 [2024-11-19 11:25:39.033961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.994 [2024-11-19 11:25:39.033969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.994 qpair failed and we were unable to recover it. 
00:31:30.994 [2024-11-19 11:25:39.034293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.994 [2024-11-19 11:25:39.034299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.995 qpair failed and we were unable to recover it. 00:31:30.995 [2024-11-19 11:25:39.034586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.995 [2024-11-19 11:25:39.034593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.995 qpair failed and we were unable to recover it. 00:31:30.995 [2024-11-19 11:25:39.034913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.995 [2024-11-19 11:25:39.034920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.995 qpair failed and we were unable to recover it. 00:31:30.995 [2024-11-19 11:25:39.035262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.995 [2024-11-19 11:25:39.035269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.995 qpair failed and we were unable to recover it. 00:31:30.995 [2024-11-19 11:25:39.035565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.995 [2024-11-19 11:25:39.035572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.996 qpair failed and we were unable to recover it. 
00:31:30.996 [2024-11-19 11:25:39.035752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.996 [2024-11-19 11:25:39.035760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.996 qpair failed and we were unable to recover it. 00:31:30.996 [2024-11-19 11:25:39.036051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.996 [2024-11-19 11:25:39.036060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.996 qpair failed and we were unable to recover it. 00:31:30.996 [2024-11-19 11:25:39.036355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.996 [2024-11-19 11:25:39.036362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.996 qpair failed and we were unable to recover it. 00:31:30.996 [2024-11-19 11:25:39.036672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.996 [2024-11-19 11:25:39.036679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.996 qpair failed and we were unable to recover it. 00:31:30.996 [2024-11-19 11:25:39.036843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.996 [2024-11-19 11:25:39.036850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.997 qpair failed and we were unable to recover it. 
00:31:30.997 [2024-11-19 11:25:39.037162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.997 [2024-11-19 11:25:39.037169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.997 qpair failed and we were unable to recover it. 00:31:30.997 [2024-11-19 11:25:39.037332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.997 [2024-11-19 11:25:39.037340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.997 qpair failed and we were unable to recover it. 00:31:30.997 [2024-11-19 11:25:39.037631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.997 [2024-11-19 11:25:39.037639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.997 qpair failed and we were unable to recover it. 00:31:30.998 [2024-11-19 11:25:39.037939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.998 [2024-11-19 11:25:39.037946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.998 qpair failed and we were unable to recover it. 00:31:30.998 [2024-11-19 11:25:39.038270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.998 [2024-11-19 11:25:39.038277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.998 qpair failed and we were unable to recover it. 
00:31:30.998 [2024-11-19 11:25:39.038440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.998 [2024-11-19 11:25:39.038447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.998 qpair failed and we were unable to recover it. 00:31:30.998 [2024-11-19 11:25:39.038774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.998 [2024-11-19 11:25:39.038781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.998 qpair failed and we were unable to recover it. 00:31:30.999 [2024-11-19 11:25:39.039101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.999 [2024-11-19 11:25:39.039109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.999 qpair failed and we were unable to recover it. 00:31:30.999 [2024-11-19 11:25:39.039433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.999 [2024-11-19 11:25:39.039440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.999 qpair failed and we were unable to recover it. 00:31:30.999 [2024-11-19 11:25:39.039638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.999 [2024-11-19 11:25:39.039645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.999 qpair failed and we were unable to recover it. 
00:31:30.999 [2024-11-19 11:25:39.040007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.999 [2024-11-19 11:25:39.040015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.999 qpair failed and we were unable to recover it. 00:31:30.999 [2024-11-19 11:25:39.040313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.999 [2024-11-19 11:25:39.040320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:30.999 qpair failed and we were unable to recover it. 00:31:30.999 [2024-11-19 11:25:39.040535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.999 [2024-11-19 11:25:39.040541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.000 qpair failed and we were unable to recover it. 00:31:31.000 [2024-11-19 11:25:39.040877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.000 [2024-11-19 11:25:39.040885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.000 qpair failed and we were unable to recover it. 00:31:31.000 [2024-11-19 11:25:39.041225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.000 [2024-11-19 11:25:39.041232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.000 qpair failed and we were unable to recover it. 
00:31:31.000 [2024-11-19 11:25:39.041465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.000 [2024-11-19 11:25:39.041472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.000 qpair failed and we were unable to recover it. 00:31:31.000 [2024-11-19 11:25:39.041857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.000 [2024-11-19 11:25:39.041867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.001 qpair failed and we were unable to recover it. 00:31:31.001 [2024-11-19 11:25:39.042076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.001 [2024-11-19 11:25:39.042083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.001 qpair failed and we were unable to recover it. 00:31:31.001 [2024-11-19 11:25:39.042423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.001 [2024-11-19 11:25:39.042430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.001 qpair failed and we were unable to recover it. 00:31:31.001 [2024-11-19 11:25:39.042744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.002 [2024-11-19 11:25:39.042751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.002 qpair failed and we were unable to recover it. 
00:31:31.002 [2024-11-19 11:25:39.043055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.002 [2024-11-19 11:25:39.043062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.002 qpair failed and we were unable to recover it. 00:31:31.002 [2024-11-19 11:25:39.043411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.002 [2024-11-19 11:25:39.043418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.002 qpair failed and we were unable to recover it. 00:31:31.002 [2024-11-19 11:25:39.043604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.002 [2024-11-19 11:25:39.043610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.002 qpair failed and we were unable to recover it. 00:31:31.002 [2024-11-19 11:25:39.043814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.002 [2024-11-19 11:25:39.043821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.002 qpair failed and we were unable to recover it. 00:31:31.002 [2024-11-19 11:25:39.044183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.003 [2024-11-19 11:25:39.044192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.003 qpair failed and we were unable to recover it. 
00:31:31.003 [2024-11-19 11:25:39.044510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.003 [2024-11-19 11:25:39.044517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.003 qpair failed and we were unable to recover it. 00:31:31.003 [2024-11-19 11:25:39.044834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.003 [2024-11-19 11:25:39.044840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.003 qpair failed and we were unable to recover it. 00:31:31.003 [2024-11-19 11:25:39.045009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.003 [2024-11-19 11:25:39.045017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.003 qpair failed and we were unable to recover it. 00:31:31.003 [2024-11-19 11:25:39.045254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.004 [2024-11-19 11:25:39.045261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.004 qpair failed and we were unable to recover it. 00:31:31.004 [2024-11-19 11:25:39.045441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.004 [2024-11-19 11:25:39.045449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.004 qpair failed and we were unable to recover it. 
00:31:31.004 [2024-11-19 11:25:39.045765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.004 [2024-11-19 11:25:39.045772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:31.004 qpair failed and we were unable to recover it.
(the three lines above repeat verbatim with fresh timestamps from 11:25:39.045765 through 11:25:39.077851 — about 115 consecutive connect() failures against tqpair=0x7fe3e4000b90 at 10.0.0.2:4420, all errno = 111)
00:31:31.029 [2024-11-19 11:25:39.078164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.029 [2024-11-19 11:25:39.078172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.029 qpair failed and we were unable to recover it. 00:31:31.029 [2024-11-19 11:25:39.078381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.029 [2024-11-19 11:25:39.078388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.029 qpair failed and we were unable to recover it. 00:31:31.029 [2024-11-19 11:25:39.078596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.029 [2024-11-19 11:25:39.078609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.029 qpair failed and we were unable to recover it. 00:31:31.029 [2024-11-19 11:25:39.078914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.029 [2024-11-19 11:25:39.078921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.029 qpair failed and we were unable to recover it. 00:31:31.029 [2024-11-19 11:25:39.079225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.029 [2024-11-19 11:25:39.079231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.029 qpair failed and we were unable to recover it. 
00:31:31.030 [2024-11-19 11:25:39.079556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.030 [2024-11-19 11:25:39.079563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.030 qpair failed and we were unable to recover it. 00:31:31.030 [2024-11-19 11:25:39.079872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.030 [2024-11-19 11:25:39.079879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.030 qpair failed and we were unable to recover it. 00:31:31.030 [2024-11-19 11:25:39.080195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.030 [2024-11-19 11:25:39.080202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.030 qpair failed and we were unable to recover it. 00:31:31.030 [2024-11-19 11:25:39.080601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.030 [2024-11-19 11:25:39.080608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.030 qpair failed and we were unable to recover it. 00:31:31.030 [2024-11-19 11:25:39.080909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.030 [2024-11-19 11:25:39.080916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.030 qpair failed and we were unable to recover it. 
00:31:31.031 [2024-11-19 11:25:39.081223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.031 [2024-11-19 11:25:39.081230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.031 qpair failed and we were unable to recover it. 00:31:31.031 [2024-11-19 11:25:39.081523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.031 [2024-11-19 11:25:39.081533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.031 qpair failed and we were unable to recover it. 00:31:31.031 [2024-11-19 11:25:39.081707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.031 [2024-11-19 11:25:39.081714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.031 qpair failed and we were unable to recover it. 00:31:31.031 [2024-11-19 11:25:39.082028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.031 [2024-11-19 11:25:39.082035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.031 qpair failed and we were unable to recover it. 00:31:31.032 [2024-11-19 11:25:39.082332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.032 [2024-11-19 11:25:39.082339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.032 qpair failed and we were unable to recover it. 
00:31:31.032 [2024-11-19 11:25:39.082665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.032 [2024-11-19 11:25:39.082672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.032 qpair failed and we were unable to recover it. 00:31:31.032 [2024-11-19 11:25:39.082891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.032 [2024-11-19 11:25:39.082898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.032 qpair failed and we were unable to recover it. 00:31:31.032 [2024-11-19 11:25:39.083065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.033 [2024-11-19 11:25:39.083072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.033 qpair failed and we were unable to recover it. 00:31:31.033 [2024-11-19 11:25:39.083365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.033 [2024-11-19 11:25:39.083371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.033 qpair failed and we were unable to recover it. 00:31:31.033 [2024-11-19 11:25:39.083745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.033 [2024-11-19 11:25:39.083751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.033 qpair failed and we were unable to recover it. 
00:31:31.033 [2024-11-19 11:25:39.083988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.033 [2024-11-19 11:25:39.083996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.033 qpair failed and we were unable to recover it. 00:31:31.033 [2024-11-19 11:25:39.084330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.033 [2024-11-19 11:25:39.084337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.033 qpair failed and we were unable to recover it. 00:31:31.033 [2024-11-19 11:25:39.084690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.033 [2024-11-19 11:25:39.084697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.033 qpair failed and we were unable to recover it. 00:31:31.034 [2024-11-19 11:25:39.085010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.034 [2024-11-19 11:25:39.085017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.034 qpair failed and we were unable to recover it. 00:31:31.034 [2024-11-19 11:25:39.085318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.034 [2024-11-19 11:25:39.085332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.034 qpair failed and we were unable to recover it. 
00:31:31.034 [2024-11-19 11:25:39.085642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.034 [2024-11-19 11:25:39.085649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.034 qpair failed and we were unable to recover it. 00:31:31.034 [2024-11-19 11:25:39.085825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.034 [2024-11-19 11:25:39.085832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.034 qpair failed and we were unable to recover it. 00:31:31.034 [2024-11-19 11:25:39.086180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.034 [2024-11-19 11:25:39.086188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.034 qpair failed and we were unable to recover it. 00:31:31.034 [2024-11-19 11:25:39.086487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.034 [2024-11-19 11:25:39.086495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.034 qpair failed and we were unable to recover it. 00:31:31.034 [2024-11-19 11:25:39.086804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.034 [2024-11-19 11:25:39.086812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.035 qpair failed and we were unable to recover it. 
00:31:31.035 [2024-11-19 11:25:39.087118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.035 [2024-11-19 11:25:39.087126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.035 qpair failed and we were unable to recover it. 00:31:31.035 [2024-11-19 11:25:39.087433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.035 [2024-11-19 11:25:39.087440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.035 qpair failed and we were unable to recover it. 00:31:31.035 [2024-11-19 11:25:39.087756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.035 [2024-11-19 11:25:39.087763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.035 qpair failed and we were unable to recover it. 00:31:31.035 [2024-11-19 11:25:39.088071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.035 [2024-11-19 11:25:39.088079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.035 qpair failed and we were unable to recover it. 00:31:31.035 [2024-11-19 11:25:39.088438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.035 [2024-11-19 11:25:39.088445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.035 qpair failed and we were unable to recover it. 
00:31:31.035 [2024-11-19 11:25:39.088747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.035 [2024-11-19 11:25:39.088754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.035 qpair failed and we were unable to recover it. 00:31:31.035 [2024-11-19 11:25:39.089066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.035 [2024-11-19 11:25:39.089074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.035 qpair failed and we were unable to recover it. 00:31:31.035 [2024-11-19 11:25:39.089362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.036 [2024-11-19 11:25:39.089368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.036 qpair failed and we were unable to recover it. 00:31:31.036 [2024-11-19 11:25:39.089681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.036 [2024-11-19 11:25:39.089688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.036 qpair failed and we were unable to recover it. 00:31:31.036 [2024-11-19 11:25:39.089899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.036 [2024-11-19 11:25:39.089906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.036 qpair failed and we were unable to recover it. 
00:31:31.036 [2024-11-19 11:25:39.090191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.036 [2024-11-19 11:25:39.090197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.036 qpair failed and we were unable to recover it. 00:31:31.036 [2024-11-19 11:25:39.090495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.036 [2024-11-19 11:25:39.090502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.036 qpair failed and we were unable to recover it. 00:31:31.036 [2024-11-19 11:25:39.090814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.036 [2024-11-19 11:25:39.090820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.036 qpair failed and we were unable to recover it. 00:31:31.036 [2024-11-19 11:25:39.091157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.036 [2024-11-19 11:25:39.091165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.036 qpair failed and we were unable to recover it. 00:31:31.036 [2024-11-19 11:25:39.091518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.036 [2024-11-19 11:25:39.091525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.037 qpair failed and we were unable to recover it. 
00:31:31.037 [2024-11-19 11:25:39.091867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.037 [2024-11-19 11:25:39.091875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.037 qpair failed and we were unable to recover it. 00:31:31.037 [2024-11-19 11:25:39.092165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.037 [2024-11-19 11:25:39.092171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.037 qpair failed and we were unable to recover it. 00:31:31.037 [2024-11-19 11:25:39.092469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.037 [2024-11-19 11:25:39.092476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.037 qpair failed and we were unable to recover it. 00:31:31.037 [2024-11-19 11:25:39.092786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.037 [2024-11-19 11:25:39.092792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.037 qpair failed and we were unable to recover it. 00:31:31.037 [2024-11-19 11:25:39.093087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.037 [2024-11-19 11:25:39.093094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.037 qpair failed and we were unable to recover it. 
00:31:31.038 [2024-11-19 11:25:39.093412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.038 [2024-11-19 11:25:39.093419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.038 qpair failed and we were unable to recover it. 00:31:31.038 [2024-11-19 11:25:39.093819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.038 [2024-11-19 11:25:39.093827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.038 qpair failed and we were unable to recover it. 00:31:31.038 [2024-11-19 11:25:39.094022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.038 [2024-11-19 11:25:39.094029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.038 qpair failed and we were unable to recover it. 00:31:31.038 [2024-11-19 11:25:39.094242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.038 [2024-11-19 11:25:39.094249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.038 qpair failed and we were unable to recover it. 00:31:31.038 [2024-11-19 11:25:39.094530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.038 [2024-11-19 11:25:39.094537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.038 qpair failed and we were unable to recover it. 
00:31:31.038 [2024-11-19 11:25:39.094841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.038 [2024-11-19 11:25:39.094848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.038 qpair failed and we were unable to recover it. 00:31:31.038 [2024-11-19 11:25:39.095105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.038 [2024-11-19 11:25:39.095113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.038 qpair failed and we were unable to recover it. 00:31:31.038 [2024-11-19 11:25:39.095369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.039 [2024-11-19 11:25:39.095376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.039 qpair failed and we were unable to recover it. 00:31:31.039 [2024-11-19 11:25:39.095750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.039 [2024-11-19 11:25:39.095757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.039 qpair failed and we were unable to recover it. 00:31:31.039 [2024-11-19 11:25:39.096087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.039 [2024-11-19 11:25:39.096095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.039 qpair failed and we were unable to recover it. 
00:31:31.039 [2024-11-19 11:25:39.096266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.039 [2024-11-19 11:25:39.096273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.039 qpair failed and we were unable to recover it. 00:31:31.039 [2024-11-19 11:25:39.096592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.039 [2024-11-19 11:25:39.096599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.039 qpair failed and we were unable to recover it. 00:31:31.039 [2024-11-19 11:25:39.096918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.039 [2024-11-19 11:25:39.096925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.039 qpair failed and we were unable to recover it. 00:31:31.039 [2024-11-19 11:25:39.097246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.039 [2024-11-19 11:25:39.097254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.039 qpair failed and we were unable to recover it. 00:31:31.039 [2024-11-19 11:25:39.097436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.039 [2024-11-19 11:25:39.097444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.039 qpair failed and we were unable to recover it. 
00:31:31.040 [2024-11-19 11:25:39.097728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.040 [2024-11-19 11:25:39.097736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.040 qpair failed and we were unable to recover it. 00:31:31.040 [2024-11-19 11:25:39.098080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.040 [2024-11-19 11:25:39.098087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.040 qpair failed and we were unable to recover it. 00:31:31.040 [2024-11-19 11:25:39.098388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.040 [2024-11-19 11:25:39.098402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.040 qpair failed and we were unable to recover it. 00:31:31.041 [2024-11-19 11:25:39.098765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.041 [2024-11-19 11:25:39.098772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.041 qpair failed and we were unable to recover it. 00:31:31.041 [2024-11-19 11:25:39.099060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.041 [2024-11-19 11:25:39.099068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.041 qpair failed and we were unable to recover it. 
00:31:31.041 [2024-11-19 11:25:39.099256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.041 [2024-11-19 11:25:39.099263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.041 qpair failed and we were unable to recover it. 00:31:31.041 [2024-11-19 11:25:39.099549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.041 [2024-11-19 11:25:39.099556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.041 qpair failed and we were unable to recover it. 00:31:31.041 [2024-11-19 11:25:39.099741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.041 [2024-11-19 11:25:39.099748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.041 qpair failed and we were unable to recover it. 00:31:31.041 [2024-11-19 11:25:39.100132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.041 [2024-11-19 11:25:39.100139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.041 qpair failed and we were unable to recover it. 00:31:31.041 [2024-11-19 11:25:39.100436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.042 [2024-11-19 11:25:39.100443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.042 qpair failed and we were unable to recover it. 
00:31:31.042 [2024-11-19 11:25:39.100764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.042 [2024-11-19 11:25:39.100771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.042 qpair failed and we were unable to recover it. 00:31:31.042 [2024-11-19 11:25:39.101144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.042 [2024-11-19 11:25:39.101151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.042 qpair failed and we were unable to recover it. 00:31:31.042 [2024-11-19 11:25:39.101479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.042 [2024-11-19 11:25:39.101486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.042 qpair failed and we were unable to recover it. 00:31:31.042 [2024-11-19 11:25:39.101669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.042 [2024-11-19 11:25:39.101678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.042 qpair failed and we were unable to recover it. 00:31:31.042 [2024-11-19 11:25:39.101858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.042 [2024-11-19 11:25:39.101868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.042 qpair failed and we were unable to recover it. 
00:31:31.052 [2024-11-19 11:25:39.134390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.134397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.134608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.134616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.134913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.134920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.135196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.135203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.135514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.135521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 
00:31:31.052 [2024-11-19 11:25:39.135559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.135566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.135849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.135856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.136163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.136169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.136491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.136499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.136803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.136810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 
00:31:31.052 [2024-11-19 11:25:39.137123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.137130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.052 [2024-11-19 11:25:39.137452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.052 [2024-11-19 11:25:39.137459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.052 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.137631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.137637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.137922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.137929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.138049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.138056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.138316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.138323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.138633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.138640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.138949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.138956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.139252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.139259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.139593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.139600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.139890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.139898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.140184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.140191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.140501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.140509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.140827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.140835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.141023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.141031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.141321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.141328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.141637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.141645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.141856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.141866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.142198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.142204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.142401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.142407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.142617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.142625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.142856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.142869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.143061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.143067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.143387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.143394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.143715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.143721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.144036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.144044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.144363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.144370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.144640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.144647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.144974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.144981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.145299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.145306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.145616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.145623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.145923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.145930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.146301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.146308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.146690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.146696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.146953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.146960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.147292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.147299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.147596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.147603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.147868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.147876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.148066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.148075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.148383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.148389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.148682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.148690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.148999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.149006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.149314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.149321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.149535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.149541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.149832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.149838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 
00:31:31.053 [2024-11-19 11:25:39.150136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.053 [2024-11-19 11:25:39.150143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.053 qpair failed and we were unable to recover it. 00:31:31.053 [2024-11-19 11:25:39.150344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.150351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.150692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.150699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.150899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.150906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.151260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.151266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 
00:31:31.054 [2024-11-19 11:25:39.151474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.151481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.151822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.151829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.152121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.152129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.152442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.152448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.152639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.152645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 
00:31:31.054 [2024-11-19 11:25:39.152921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.152928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.153248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.153256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.153645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.153653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.153965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.153972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.154128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.154136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 
00:31:31.054 [2024-11-19 11:25:39.154420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.154428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.154756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.154763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.154923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.154931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.155233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.155240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.155551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.155557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 
00:31:31.054 [2024-11-19 11:25:39.155869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.155877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.156162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.156169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.156476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.156483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.156793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.156799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 00:31:31.054 [2024-11-19 11:25:39.157115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.054 [2024-11-19 11:25:39.157122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.054 qpair failed and we were unable to recover it. 
00:31:31.057 [2024-11-19 11:25:39.189491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.189499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.189808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.189815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.190218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.190225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.190508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.190516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.190808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.190814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 
00:31:31.057 [2024-11-19 11:25:39.190991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.190999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.191252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.191259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.191586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.191592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.191885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.191893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.192211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.192218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 
00:31:31.057 [2024-11-19 11:25:39.192530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.192536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.192848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.192855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.193040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.193048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.193422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.193428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.193734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.193741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 
00:31:31.057 [2024-11-19 11:25:39.194056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.194063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.194361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.194368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.194682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.194690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.194860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.194877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.195185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.195192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 
00:31:31.057 [2024-11-19 11:25:39.195455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.057 [2024-11-19 11:25:39.195461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.057 qpair failed and we were unable to recover it. 00:31:31.057 [2024-11-19 11:25:39.195780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.195787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.196171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.196178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.196375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.196382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.196728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.196734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.197024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.197032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.197248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.197255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.197481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.197487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.197770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.197777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.198105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.198112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.198315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.198321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.198590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.198597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.198806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.198813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.199158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.199165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.199351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.199358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.199719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.199725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.200094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.200101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.200375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.200382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.200712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.200718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.201007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.201014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.201323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.201329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.201647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.201654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.201929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.201936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.202240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.202247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.202541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.202548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.202849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.202857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.203184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.203192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.203398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.203405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.203591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.203599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.203875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.203883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.204193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.204200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.204491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.204497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.204680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.204686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.204965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.204972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.205292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.205298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.205595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.205603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.205931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.205938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.206250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.206257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.206565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.206573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.206888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.206895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.207198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.207205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.207516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.207523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.207833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.207839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.208148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.208155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.208445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.208451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.208764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.208771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.209129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.209136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.209423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.209431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.209733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.209740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.209976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.209984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.210323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.210330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.210647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.210654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.210944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.210952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.211296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.211303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [2024-11-19 11:25:39.211579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.211594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 
00:31:31.058 [2024-11-19 11:25:39.211773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.058 [2024-11-19 11:25:39.211780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.058 qpair failed and we were unable to recover it. 00:31:31.058 [... preceding three messages repeated for each subsequent connection attempt to 10.0.0.2:4420 (tqpair=0x7fe3e4000b90), timestamps 2024-11-19 11:25:39.212067 through 11:25:39.245187 ...]
00:31:31.060 [2024-11-19 11:25:39.245483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.245490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.245656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.245664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.245834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.245842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.246112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.246119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.246464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.246471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 
00:31:31.060 [2024-11-19 11:25:39.246770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.246777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.247070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.247078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.247384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.247390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.247695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.247702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.247926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.247933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 
00:31:31.060 [2024-11-19 11:25:39.248232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.248239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.248526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.248533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.248667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.248675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.249023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.249030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.249342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.249349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 
00:31:31.060 [2024-11-19 11:25:39.249656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.249663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.249972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.249979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.250206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.250212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.250493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.250500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.250815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.250822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 
00:31:31.060 [2024-11-19 11:25:39.251131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.251138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.251435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.251442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.251760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.251768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.252081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.252088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.252470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.252477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 
00:31:31.060 [2024-11-19 11:25:39.252768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.252776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.253085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.253093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.253401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.253409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.253724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.253734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.254044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.254051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 
00:31:31.060 [2024-11-19 11:25:39.254363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.254370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.254682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.254688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.254889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.254895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.255242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.255248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.255541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.255548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 
00:31:31.060 [2024-11-19 11:25:39.255855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.255864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.256149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.060 [2024-11-19 11:25:39.256156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.060 qpair failed and we were unable to recover it. 00:31:31.060 [2024-11-19 11:25:39.256312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.256320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.256587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.256593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.256901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.256908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 
00:31:31.061 [2024-11-19 11:25:39.257226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.257233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.257539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.257547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.257866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.257873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.258236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.258243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.258584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.258591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 
00:31:31.061 [2024-11-19 11:25:39.258895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.258902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.259218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.259225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.259546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.259552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.259872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.259879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.260203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.260210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 
00:31:31.061 [2024-11-19 11:25:39.260527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.260534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.260854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.260860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.261149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.261156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.261538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.261546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.261845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.261853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 
00:31:31.061 [2024-11-19 11:25:39.262049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.262057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.262335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.262342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.262625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.262631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.262945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.262952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.263259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.263266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 
00:31:31.061 [2024-11-19 11:25:39.263582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.263589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.263886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.263893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.264217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.264223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.264537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.264544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.264853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.264860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 
00:31:31.061 [2024-11-19 11:25:39.265157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.265164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.265486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.265493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.265689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.265695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.265978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.265988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.266328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.266334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 
00:31:31.061 [2024-11-19 11:25:39.266617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.266625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.266937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.266944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.267159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.267166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.267335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.267343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 00:31:31.061 [2024-11-19 11:25:39.267643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.061 [2024-11-19 11:25:39.267650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.061 qpair failed and we were unable to recover it. 
00:31:31.061 [2024-11-19 11:25:39.267942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.061 [2024-11-19 11:25:39.267950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:31.061 qpair failed and we were unable to recover it.
00:31:31.063 [2024-11-19 11:25:39.301934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.301940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.302274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.302281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.302607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.302614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.302806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.302813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.303142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.303148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 
00:31:31.063 [2024-11-19 11:25:39.303452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.303459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.303768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.303775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.304133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.304140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.304425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.304432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.304740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.304747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 
00:31:31.063 [2024-11-19 11:25:39.305075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.305082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.305406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.305413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.305606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.305613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.305791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.305799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.306074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.306082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 
00:31:31.063 [2024-11-19 11:25:39.306425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.306432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.306745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.306753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.307085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.307092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.307404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.307412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.307601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.307610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 
00:31:31.063 [2024-11-19 11:25:39.307879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.307887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.308197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.308204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.308386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.308393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.308578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.308585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.308919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.308926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 
00:31:31.063 [2024-11-19 11:25:39.309138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.309147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.309303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.309310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.309600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.309607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.309910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.309917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.310182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.310189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 
00:31:31.063 [2024-11-19 11:25:39.310496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.310511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.310826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.310832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.311115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.311122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.311437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.311443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.311754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.311761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 
00:31:31.063 [2024-11-19 11:25:39.311931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.311939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.312276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.312283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.312610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.063 [2024-11-19 11:25:39.312616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.063 qpair failed and we were unable to recover it. 00:31:31.063 [2024-11-19 11:25:39.312930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.312938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.313138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.313146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 
00:31:31.064 [2024-11-19 11:25:39.313421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.313428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.313725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.313732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.314025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.314032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.314336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.314344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.314522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.314529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 
00:31:31.064 [2024-11-19 11:25:39.314849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.314856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.315154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.315160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.315357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.315363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.315641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.315648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.315964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.315971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 
00:31:31.064 [2024-11-19 11:25:39.316351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.316358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.316671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.316678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.316846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.316853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.317163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.317170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.064 [2024-11-19 11:25:39.317461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.317468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 
00:31:31.064 [2024-11-19 11:25:39.317778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.064 [2024-11-19 11:25:39.317785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.064 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.318159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.318168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.318349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.318357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.318624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.318631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.318956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.318963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 
00:31:31.352 [2024-11-19 11:25:39.319260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.319268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.319543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.319549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.319834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.319842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.320134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.320141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.320469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.320477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 
00:31:31.352 [2024-11-19 11:25:39.320770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.320779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.321066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.321073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.321392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.321399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.321711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.321719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.321789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.321796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 
00:31:31.352 [2024-11-19 11:25:39.322094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.322101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.322274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.322282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.322590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.322598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.322896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.322904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 00:31:31.352 [2024-11-19 11:25:39.323192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.323199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it. 
00:31:31.352 [2024-11-19 11:25:39.323494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.352 [2024-11-19 11:25:39.323507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.352 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 11:25:39.323763 through 11:25:39.356375, differing only in timestamps ...]
00:31:31.356 [2024-11-19 11:25:39.356707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.356715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.357022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.357029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.357321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.357328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.357508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.357515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.357828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.357835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 
00:31:31.356 [2024-11-19 11:25:39.358156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.358162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.358488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.358495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.358883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.358890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.359197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.359204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.359244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.359251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 
00:31:31.356 [2024-11-19 11:25:39.359550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.359556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.359942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.359948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.360265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.360272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.360620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.360627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.360891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.360898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 
00:31:31.356 [2024-11-19 11:25:39.361202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.361209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.361402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.361409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.361764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.361771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.362054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.362061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.362230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.362237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 
00:31:31.356 [2024-11-19 11:25:39.362532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.362539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.362849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.362856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.363064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.363071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.363369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.363376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.363692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.363699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 
00:31:31.356 [2024-11-19 11:25:39.364016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.364023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.364200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.364208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.364506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.364513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.364838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.364846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.365146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.365153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 
00:31:31.356 [2024-11-19 11:25:39.365511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.365518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.365807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.365815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.366191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.366198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.366360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.366368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 00:31:31.356 [2024-11-19 11:25:39.366554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.356 [2024-11-19 11:25:39.366562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.356 qpair failed and we were unable to recover it. 
00:31:31.357 [2024-11-19 11:25:39.366872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.366882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.367201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.367208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.367510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.367517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.367675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.367682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.367988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.367995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 
00:31:31.357 [2024-11-19 11:25:39.368367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.368374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.368659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.368665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.368852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.368858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.369146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.369153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.369481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.369487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 
00:31:31.357 [2024-11-19 11:25:39.369781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.369788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.370069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.370076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.370359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.370367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.370657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.370664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.370978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.370985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 
00:31:31.357 [2024-11-19 11:25:39.371298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.371305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.371612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.371618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.371909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.371916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.372298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.372304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.372597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.372610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 
00:31:31.357 [2024-11-19 11:25:39.372924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.372931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.373223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.373230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.373566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.373573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.373873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.373887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.374170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.374177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 
00:31:31.357 [2024-11-19 11:25:39.374473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.374480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.374792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.374799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.375106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.375114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.375422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.375430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.375552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.375560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 
00:31:31.357 [2024-11-19 11:25:39.375913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.375920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.376215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.376222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.376545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.376551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.357 qpair failed and we were unable to recover it. 00:31:31.357 [2024-11-19 11:25:39.376889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.357 [2024-11-19 11:25:39.376896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.377210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.377217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 
00:31:31.358 [2024-11-19 11:25:39.377525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.377532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.377824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.377831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.378140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.378147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.378443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.378451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.378732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.378739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 
00:31:31.358 [2024-11-19 11:25:39.379062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.379070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.379360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.379366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.379676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.379682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.379991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.379999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.380203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.380210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 
00:31:31.358 [2024-11-19 11:25:39.380571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.380578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.380884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.380891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.381184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.381191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.381375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.381382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.381687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.381694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 
00:31:31.358 [2024-11-19 11:25:39.381856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.381867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.382162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.382168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.382462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.382470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.382816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.382823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.383130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.383138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 
00:31:31.358 [2024-11-19 11:25:39.383444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.383451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.383760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.383767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.384061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.384069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.384389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.384395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.384706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.384713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 
00:31:31.358 [2024-11-19 11:25:39.385023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.385030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.385195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.385202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.385542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.385548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.385797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.385803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.386120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.386128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 
00:31:31.358 [2024-11-19 11:25:39.386423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.386430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.386725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.386731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.387018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.387025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.387340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.387346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.387632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.387639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 
00:31:31.358 [2024-11-19 11:25:39.387964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.358 [2024-11-19 11:25:39.387971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.358 qpair failed and we were unable to recover it. 00:31:31.358 [2024-11-19 11:25:39.388185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.388192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.388368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.388375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.388679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.388686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.388999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.389006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 
00:31:31.359 [2024-11-19 11:25:39.389335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.389341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.389653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.389660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.389858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.389867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.390155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.390161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.390470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.390476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 
00:31:31.359 [2024-11-19 11:25:39.390790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.390798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.391099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.391107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.391387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.391394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.391682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.391690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.391847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.391855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 
00:31:31.359 [2024-11-19 11:25:39.392166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.392173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.392479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.392486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.392689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.392696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.393015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.393022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.393314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.393321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 
00:31:31.359 [2024-11-19 11:25:39.393482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.393489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.393799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.393805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.394103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.394110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.394402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.394408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.394721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.394727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 
00:31:31.359 [2024-11-19 11:25:39.395037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.395044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.395345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.395352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.395650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.395657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.395967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.395974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.396268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.396275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 
00:31:31.359 [2024-11-19 11:25:39.396592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.396599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.396758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.396765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.397115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.397122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.397342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.397349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.397565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.397572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 
00:31:31.359 [2024-11-19 11:25:39.397785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.397791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.398081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.398089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.398275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.359 [2024-11-19 11:25:39.398283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.359 qpair failed and we were unable to recover it. 00:31:31.359 [2024-11-19 11:25:39.398609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.398616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.398925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.398932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 
00:31:31.360 [2024-11-19 11:25:39.399242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.399249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.399558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.399564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.399874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.399881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.400210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.400216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.400524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.400530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 
00:31:31.360 [2024-11-19 11:25:39.400850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.400856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.401073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.401079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.401285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.401292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.401669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.401676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.401982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.401989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 
00:31:31.360 [2024-11-19 11:25:39.402302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.402310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.402514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.402521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.402853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.402860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.403203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.403211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.403521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.403527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 
00:31:31.360 [2024-11-19 11:25:39.403721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.403728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.403950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.403958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.404102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.404109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.404401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.404408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.404589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.404596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 
00:31:31.360 [2024-11-19 11:25:39.404797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.404804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.405132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.405139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.405423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.405430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.405761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.405769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.406083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.406090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 
00:31:31.360 [2024-11-19 11:25:39.406281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.406288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.406630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.406638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.406932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.406939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.407262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.407269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 00:31:31.360 [2024-11-19 11:25:39.407466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.360 [2024-11-19 11:25:39.407474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.360 qpair failed and we were unable to recover it. 
00:31:31.363 [2024-11-19 11:25:39.440392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.363 [2024-11-19 11:25:39.440400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.363 qpair failed and we were unable to recover it. 00:31:31.363 [2024-11-19 11:25:39.440705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.363 [2024-11-19 11:25:39.440712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.363 qpair failed and we were unable to recover it. 00:31:31.363 [2024-11-19 11:25:39.440906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.363 [2024-11-19 11:25:39.440915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.363 qpair failed and we were unable to recover it. 00:31:31.363 [2024-11-19 11:25:39.441227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.363 [2024-11-19 11:25:39.441234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.363 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.441625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.441632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 
00:31:31.364 [2024-11-19 11:25:39.441923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.441931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.442227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.442234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.442453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.442460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.442740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.442747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.443029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.443036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 
00:31:31.364 [2024-11-19 11:25:39.443356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.443362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.443655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.443661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.443981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.443988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.444308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.444315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.444635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.444642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 
00:31:31.364 [2024-11-19 11:25:39.444953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.444968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.445134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.445142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.445401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.445410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.445719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.445727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.446031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.446039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 
00:31:31.364 [2024-11-19 11:25:39.446348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.446355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.446556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.446562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.446830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.446837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.447132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.447140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.447446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.447453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 
00:31:31.364 [2024-11-19 11:25:39.447736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.447743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.448032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.448038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.448306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.448313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.448623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.448629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.448924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.448932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 
00:31:31.364 [2024-11-19 11:25:39.449219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.449226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.449516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.449523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.449833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.449840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.450131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.450139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.450358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.450365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 
00:31:31.364 [2024-11-19 11:25:39.450689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.450696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.451011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.451018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.451338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.451345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.451677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.451685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.451992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.452000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 
00:31:31.364 [2024-11-19 11:25:39.452311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.452318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.364 qpair failed and we were unable to recover it. 00:31:31.364 [2024-11-19 11:25:39.452668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.364 [2024-11-19 11:25:39.452675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.452986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.452993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.453297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.453305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.453610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.453616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 
00:31:31.365 [2024-11-19 11:25:39.453928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.453935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.454255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.454261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.454556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.454563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.454874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.454881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.455180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.455187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 
00:31:31.365 [2024-11-19 11:25:39.455501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.455508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.455799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.455805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.456126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.456133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.456441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.456448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.456757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.456764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 
00:31:31.365 [2024-11-19 11:25:39.457067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.457075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.457141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.457148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.457281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.457291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.457469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.457476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.457773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.457780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 
00:31:31.365 [2024-11-19 11:25:39.458079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.458086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.458426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.458433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.458648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.458656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.459025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.459033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.459341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.459348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 
00:31:31.365 [2024-11-19 11:25:39.459657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.459665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.459978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.459985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.460203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.460210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.460497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.460504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.460811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.460819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 
00:31:31.365 [2024-11-19 11:25:39.461105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.461112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.461388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.461401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.461688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.461695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.461958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.461965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 00:31:31.365 [2024-11-19 11:25:39.462272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.365 [2024-11-19 11:25:39.462279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.365 qpair failed and we were unable to recover it. 
00:31:31.365 [2024-11-19 11:25:39.462430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.365 [2024-11-19 11:25:39.462439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:31.365 qpair failed and we were unable to recover it.
00:31:31.369 [2024-11-19 11:25:39.496601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.496608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.496920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.496927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.497235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.497242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.497544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.497551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.497758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.497765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 
00:31:31.369 [2024-11-19 11:25:39.498031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.498038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.498226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.498235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.498593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.498599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.498884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.498891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.499197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.499204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 
00:31:31.369 [2024-11-19 11:25:39.499481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.499488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.499675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.499683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.499984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.499991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.500282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.500289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.500597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.500603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 
00:31:31.369 [2024-11-19 11:25:39.500900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.500907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.501214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.501221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.501528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.501534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.501747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.501754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.502064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.502072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 
00:31:31.369 [2024-11-19 11:25:39.502391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.502398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.502741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.502747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.503044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.503051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.503381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.503388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.503678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.503685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 
00:31:31.369 [2024-11-19 11:25:39.503990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.503997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.504284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.504291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.504452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.504461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.504729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.504735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.505035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.505042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 
00:31:31.369 [2024-11-19 11:25:39.505362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.505368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.505556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.505563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.505884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.505892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.506078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.506085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.369 qpair failed and we were unable to recover it. 00:31:31.369 [2024-11-19 11:25:39.506410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.369 [2024-11-19 11:25:39.506416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 
00:31:31.370 [2024-11-19 11:25:39.506714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.506721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.507038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.507045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.507334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.507342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.507527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.507534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.507716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.507723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 
00:31:31.370 [2024-11-19 11:25:39.507957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.507964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.508301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.508308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.508598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.508606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.508927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.508934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.509216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.509224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 
00:31:31.370 [2024-11-19 11:25:39.509539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.509546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.509854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.509865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.510170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.510177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.510478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.510485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.510801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.510808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 
00:31:31.370 [2024-11-19 11:25:39.511128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.511135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.511437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.511444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.511765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.511772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.512082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.512089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.512273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.512280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 
00:31:31.370 [2024-11-19 11:25:39.512592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.512598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.512805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.512812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.513148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.513155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.513466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.513473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.513769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.513776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 
00:31:31.370 [2024-11-19 11:25:39.514157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.514164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.514413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.514420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.514736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.514743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.515033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.515041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.515350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.515357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 
00:31:31.370 [2024-11-19 11:25:39.515706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.515712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.516019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.370 [2024-11-19 11:25:39.516026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.370 qpair failed and we were unable to recover it. 00:31:31.370 [2024-11-19 11:25:39.516198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.516207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.516510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.516516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.516826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.516833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 
00:31:31.371 [2024-11-19 11:25:39.517142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.517149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.517515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.517522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.517823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.517831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.518137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.518144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.518457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.518464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 
00:31:31.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 145970 Killed "${NVMF_APP[@]}" "$@"
00:31:31.371 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:31:31.371 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:31.371 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:31.371 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:31.371 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:31.371 [2024-11-19 11:25:39.521118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.521126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.521438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.521445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.521649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.521656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.521934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.521943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.522236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.522243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 
00:31:31.371 [2024-11-19 11:25:39.522415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.522423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.522700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.522707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.523032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.523040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.523369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.523378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.523692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.523699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 
00:31:31.371 [2024-11-19 11:25:39.524016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.524024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.524404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.524411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.524755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.524763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.524951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.524958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.525291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.525298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 
00:31:31.371 [2024-11-19 11:25:39.525618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.525626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.526012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.526021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.526336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.526343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.526657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.371 [2024-11-19 11:25:39.526665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.371 qpair failed and we were unable to recover it. 00:31:31.371 [2024-11-19 11:25:39.526979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.526987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 
00:31:31.372 [2024-11-19 11:25:39.527289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.527298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.527510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.527518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.527854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.527865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.528057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.528065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.528248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.528256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 
00:31:31.372 [2024-11-19 11:25:39.528530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.528538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=146850 00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 146850 00:31:31.372 [2024-11-19 11:25:39.528934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.528943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:31.372 [2024-11-19 11:25:39.529254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.529262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 
00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 146850 ']' 00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.372 [2024-11-19 11:25:39.529602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.529611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.372 [2024-11-19 11:25:39.529922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.529931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.372 [2024-11-19 11:25:39.530261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.530270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 
00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.372 11:25:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.372 [2024-11-19 11:25:39.530580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.530588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.530887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.530896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.531298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.531306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.531630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.531638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 
00:31:31.372 [2024-11-19 11:25:39.531804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.531813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.532198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.532206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.532513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.532520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.532835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.532843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.532996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.533004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 
00:31:31.372 [2024-11-19 11:25:39.533319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.533327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.533613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.533620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.533939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.533948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.534319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.534328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.534622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.534630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 
00:31:31.372 [2024-11-19 11:25:39.534843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.534851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.535171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.535180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.535380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.535388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.535564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.535572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.535873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.535881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 
00:31:31.372 [2024-11-19 11:25:39.536021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.536028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.536238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.372 [2024-11-19 11:25:39.536247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.372 qpair failed and we were unable to recover it. 00:31:31.372 [2024-11-19 11:25:39.536470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.536478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.536689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.536698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.536924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.536932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 
00:31:31.373 [2024-11-19 11:25:39.537221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.537229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.537544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.537551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.537870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.537877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.537913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.537921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.538142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.538149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 
00:31:31.373 [2024-11-19 11:25:39.538478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.538485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.538810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.538817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.539037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.539044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.539344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.539351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.539532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.539541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 
00:31:31.373 [2024-11-19 11:25:39.539884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.539926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.540300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.540322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.540654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.540664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.541175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.541213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.541433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.541447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 
00:31:31.373 [2024-11-19 11:25:39.541791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.541806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.542171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.542183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.542521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.542531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.542853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.542867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.543300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.543312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 
00:31:31.373 [2024-11-19 11:25:39.543494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.543504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.543735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.543745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.544131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.544142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.544460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.544470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 00:31:31.373 [2024-11-19 11:25:39.544642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.373 [2024-11-19 11:25:39.544654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.373 qpair failed and we were unable to recover it. 
00:31:31.373 [2024-11-19 11:25:39.545025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.373 [2024-11-19 11:25:39.545035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.373 qpair failed and we were unable to recover it.
[... identical triplet — connect() failed, errno = 111 / sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeated continuously from 11:25:39.545377 through 11:25:39.580178 ...]
00:31:31.376 [2024-11-19 11:25:39.580486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.376 [2024-11-19 11:25:39.580496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.376 qpair failed and we were unable to recover it. 00:31:31.376 [2024-11-19 11:25:39.580714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.376 [2024-11-19 11:25:39.580724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.376 qpair failed and we were unable to recover it. 00:31:31.376 [2024-11-19 11:25:39.580972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.376 [2024-11-19 11:25:39.580982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.376 qpair failed and we were unable to recover it. 00:31:31.376 [2024-11-19 11:25:39.581329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.376 [2024-11-19 11:25:39.581339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.376 qpair failed and we were unable to recover it. 00:31:31.376 [2024-11-19 11:25:39.581524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.376 [2024-11-19 11:25:39.581534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.376 qpair failed and we were unable to recover it. 
00:31:31.376 [2024-11-19 11:25:39.581783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.376 [2024-11-19 11:25:39.581793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.376 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.582123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.582133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.582232] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:31:31.377 [2024-11-19 11:25:39.582283] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.377 [2024-11-19 11:25:39.582473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.582484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.582784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.582793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 
00:31:31.377 [2024-11-19 11:25:39.582982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.582992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.583297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.583309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.583682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.583692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.584041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.584052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.584243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.584254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 
00:31:31.377 [2024-11-19 11:25:39.584574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.584584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.584915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.584927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.585264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.585275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.585620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.585631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.585926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.585937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 
00:31:31.377 [2024-11-19 11:25:39.586280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.586292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.586624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.586635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.586965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.586976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.587320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.587331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.587693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.587703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 
00:31:31.377 [2024-11-19 11:25:39.588036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.588047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.588257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.588268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.588558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.588570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.588767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.588777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.588995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.589006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 
00:31:31.377 [2024-11-19 11:25:39.589362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.589372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.589691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.589702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.590020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.590031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.590347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.590357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.590699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.590710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 
00:31:31.377 [2024-11-19 11:25:39.591036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.591047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.591372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.591383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.591676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.591686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.591750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.591761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.591926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.591937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 
00:31:31.377 [2024-11-19 11:25:39.592258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.592269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.592477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.592488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.592805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.592816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.593243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.377 [2024-11-19 11:25:39.593254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.377 qpair failed and we were unable to recover it. 00:31:31.377 [2024-11-19 11:25:39.593574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.593585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 
00:31:31.378 [2024-11-19 11:25:39.593905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.593916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.594207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.594217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.594558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.594569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.594881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.594892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.595081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.595092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 
00:31:31.378 [2024-11-19 11:25:39.595409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.595420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.595759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.595770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.596100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.596112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.596409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.596419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.596770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.596783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 
00:31:31.378 [2024-11-19 11:25:39.597107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.597120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.597314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.597325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.597611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.597622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.597809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.597820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.598000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.598011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 
00:31:31.378 [2024-11-19 11:25:39.598297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.598309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.598665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.598675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.598871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.598882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.599231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.599243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.599410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.599421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 
00:31:31.378 [2024-11-19 11:25:39.599780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.599791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.600134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.600146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.600460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.600471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.600677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.600688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.601002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.601013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 
00:31:31.378 [2024-11-19 11:25:39.601312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.601323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.601667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.601677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.601979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.601989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.602362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.602371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.602693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.602703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 
00:31:31.378 [2024-11-19 11:25:39.603020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.603031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.603420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.603431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.603761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.603771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.604110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.604121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.604313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.604325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 
00:31:31.378 [2024-11-19 11:25:39.604544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.604554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.604852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.378 [2024-11-19 11:25:39.604876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.378 qpair failed and we were unable to recover it. 00:31:31.378 [2024-11-19 11:25:39.605236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.605247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.605530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.605539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.605928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.605938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 
00:31:31.379 [2024-11-19 11:25:39.606236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.606246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.606578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.606589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.606911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.606922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.607109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.607119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.607562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.607572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 
00:31:31.379 [2024-11-19 11:25:39.607756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.607767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.608157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.608167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.608478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.608489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.608815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.608825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.609214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.609226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 
00:31:31.379 [2024-11-19 11:25:39.609541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.609551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.609930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.609941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.610270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.610279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.610576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.610586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.610873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.610884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 
00:31:31.379 [2024-11-19 11:25:39.611204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.611214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.611537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.611547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.611860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.611873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.612180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.612190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.612383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.612394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 
00:31:31.379 [2024-11-19 11:25:39.612750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.612760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.612960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.612971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.613324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.613334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.613575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.613585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.613906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.613916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 
00:31:31.379 [2024-11-19 11:25:39.614244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.614255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.614445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.614455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.614660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.614670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.615019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.615030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.615318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.615329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 
00:31:31.379 [2024-11-19 11:25:39.615542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.615553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.615876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.615887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.616221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.616231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.616549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.616558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 00:31:31.379 [2024-11-19 11:25:39.616911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.616922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.379 qpair failed and we were unable to recover it. 
00:31:31.379 [2024-11-19 11:25:39.617240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.379 [2024-11-19 11:25:39.617251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.617568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.617578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.617907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.617920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.618087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.618098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.618309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.618319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 
00:31:31.380 [2024-11-19 11:25:39.618598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.618608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.618917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.618927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.619245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.619255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.619572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.619581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.619901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.619911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 
00:31:31.380 [2024-11-19 11:25:39.620221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.620231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.620571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.620580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.620926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.620938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.621258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.621268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.621605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.621615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 
00:31:31.380 [2024-11-19 11:25:39.621935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.621946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.622283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.622294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.622624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.622634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.622833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.622843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.623174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.623186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 
00:31:31.380 [2024-11-19 11:25:39.623517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.623528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.623708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.623718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.624042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.624052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.624249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.624259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.624547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.624556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 
00:31:31.380 [2024-11-19 11:25:39.624878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.624888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.625098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.625108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.625377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.625390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.625624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.625634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.625726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.625735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 
00:31:31.380 [2024-11-19 11:25:39.625960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.625970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.626311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.626320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.626644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.626654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.626951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.380 [2024-11-19 11:25:39.626961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.380 qpair failed and we were unable to recover it. 00:31:31.380 [2024-11-19 11:25:39.627350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.627359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 
00:31:31.381 [2024-11-19 11:25:39.627538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.627550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.627929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.627939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.628303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.628313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.628645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.628655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.628814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.628824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 
00:31:31.381 [2024-11-19 11:25:39.629167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.629177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.629358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.629369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.629717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.629726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.629895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.629906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.630083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.630093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 
00:31:31.381 [2024-11-19 11:25:39.630420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.630430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.630729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.630740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.631037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.631047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.631355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.631366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 00:31:31.381 [2024-11-19 11:25:39.631663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.381 [2024-11-19 11:25:39.631674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.381 qpair failed and we were unable to recover it. 
00:31:31.381 [2024-11-19 11:25:39.631855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.381 [2024-11-19 11:25:39.631869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.381 qpair failed and we were unable to recover it.
00:31:31.384 [previous 3 lines repeated for every reconnect attempt from 11:25:39.632086 through 11:25:39.665882, all with tqpair=0xf1e490, addr=10.0.0.2, port=4420, errno = 111]
00:31:31.384 [2024-11-19 11:25:39.666257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.666267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.666553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.666563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.666894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.666904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.667250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.667259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.667566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.667575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 
00:31:31.384 [2024-11-19 11:25:39.667897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.667907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.668072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.668084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.668419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.668429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.668721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.668731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.669036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.669046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 
00:31:31.384 [2024-11-19 11:25:39.669123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.669133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.669437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.669447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.669769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.669779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.670085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.670095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.670415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.670424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 
00:31:31.384 [2024-11-19 11:25:39.670743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.670753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.671059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.671070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.384 qpair failed and we were unable to recover it. 00:31:31.384 [2024-11-19 11:25:39.671384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.384 [2024-11-19 11:25:39.671393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.671748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.671758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.672106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.672117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 
00:31:31.385 [2024-11-19 11:25:39.672437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.672447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.672785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.672795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.673158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.673168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.673339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.673349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.673541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.673551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 
00:31:31.385 [2024-11-19 11:25:39.673856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.673874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.674071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.674082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.674408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.674418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.674735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.674745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.674939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.674949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 
00:31:31.385 [2024-11-19 11:25:39.675299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.675308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.675462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.675471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.675681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.675690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.675891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.675902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.676302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.676312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 
00:31:31.385 [2024-11-19 11:25:39.676485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.676495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.676817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.676828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.677243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.677252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.678168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.678192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.678537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.678550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 
00:31:31.385 [2024-11-19 11:25:39.678897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.678908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.679076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.679086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.679414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.679423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.679754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.679764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.680081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.680092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 
00:31:31.385 [2024-11-19 11:25:39.680372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.680382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.680679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.680689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.681034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.681043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.681349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.681359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.681675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.681685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 
00:31:31.385 [2024-11-19 11:25:39.682003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.385 [2024-11-19 11:25:39.682013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.385 qpair failed and we were unable to recover it. 00:31:31.385 [2024-11-19 11:25:39.682319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.386 [2024-11-19 11:25:39.682328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.386 qpair failed and we were unable to recover it. 00:31:31.386 [2024-11-19 11:25:39.682490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.386 [2024-11-19 11:25:39.682500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.386 qpair failed and we were unable to recover it. 00:31:31.714 [2024-11-19 11:25:39.682762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.714 [2024-11-19 11:25:39.682773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.714 qpair failed and we were unable to recover it. 00:31:31.714 [2024-11-19 11:25:39.682991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.714 [2024-11-19 11:25:39.683002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.714 qpair failed and we were unable to recover it. 
00:31:31.714 [2024-11-19 11:25:39.683210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.714 [2024-11-19 11:25:39.683220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.714 qpair failed and we were unable to recover it. 00:31:31.714 [2024-11-19 11:25:39.683503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.714 [2024-11-19 11:25:39.683514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.714 qpair failed and we were unable to recover it. 00:31:31.714 [2024-11-19 11:25:39.683879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.714 [2024-11-19 11:25:39.683890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.714 qpair failed and we were unable to recover it. 00:31:31.714 [2024-11-19 11:25:39.684076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.714 [2024-11-19 11:25:39.684085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.714 qpair failed and we were unable to recover it. 00:31:31.714 [2024-11-19 11:25:39.684426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.684436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 
00:31:31.715 [2024-11-19 11:25:39.684738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.684749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.685096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.685106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.685298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.685309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.685601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.685611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.685951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.685961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 
00:31:31.715 [2024-11-19 11:25:39.686291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.686301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.686624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.686636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.686928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.686938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.687148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.687158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.687445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.687456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 
00:31:31.715 [2024-11-19 11:25:39.687867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.687878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.688182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.688192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.688508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.688519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.688838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.688848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 00:31:31.715 [2024-11-19 11:25:39.689093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.715 [2024-11-19 11:25:39.689104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.715 qpair failed and we were unable to recover it. 
00:31:31.715 [2024-11-19 11:25:39.689339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.715 [2024-11-19 11:25:39.689349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.715 qpair failed and we were unable to recover it.
00:31:31.715 [2024-11-19 11:25:39.691221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... same connect()/qpair-failure triplet repeated with advancing timestamps, 11:25:39.689665 through 11:25:39.722188 ...]
00:31:31.718 [2024-11-19 11:25:39.722730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.718 [2024-11-19 11:25:39.722740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.718 qpair failed and we were unable to recover it.
00:31:31.718 [2024-11-19 11:25:39.722912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.722922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.723163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.723172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.723509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.723519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.723805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.723814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.724107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.724117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 
00:31:31.718 [2024-11-19 11:25:39.724409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.724419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.724730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.724739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.725116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.725126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.725449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.725459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.725771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.725781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 
00:31:31.718 [2024-11-19 11:25:39.726098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.726109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.726489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.726499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.726812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.726824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.727115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.727125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 00:31:31.718 [2024-11-19 11:25:39.727444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.718 [2024-11-19 11:25:39.727453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.718 qpair failed and we were unable to recover it. 
00:31:31.718 [2024-11-19 11:25:39.727786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.718 [2024-11-19 11:25:39.727796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.718 qpair failed and we were unable to recover it.
00:31:31.718 [2024-11-19 11:25:39.727982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.718 [2024-11-19 11:25:39.727992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.718 qpair failed and we were unable to recover it.
00:31:31.718 [2024-11-19 11:25:39.728292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:31.718 [2024-11-19 11:25:39.728320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:31.718 [2024-11-19 11:25:39.728327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:31.718 [2024-11-19 11:25:39.728334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:31.718 [2024-11-19 11:25:39.728341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:31.718 [2024-11-19 11:25:39.728328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.718 [2024-11-19 11:25:39.728338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.719 qpair failed and we were unable to recover it.
00:31:31.719 [2024-11-19 11:25:39.728657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.728667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.728982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.728991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.729316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.729326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.729603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.729613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.729788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.729799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 
00:31:31.719 [2024-11-19 11:25:39.729889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:31:31.719 [2024-11-19 11:25:39.730004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:31:31.719 [2024-11-19 11:25:39.730172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:31.719 [2024-11-19 11:25:39.730172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:31:31.719 [2024-11-19 11:25:39.730153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.719 [2024-11-19 11:25:39.730165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.719 qpair failed and we were unable to recover it.
00:31:31.719 [2024-11-19 11:25:39.730459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.719 [2024-11-19 11:25:39.730469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.719 qpair failed and we were unable to recover it.
00:31:31.719 [2024-11-19 11:25:39.730845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.719 [2024-11-19 11:25:39.730855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.719 qpair failed and we were unable to recover it.
00:31:31.719 [2024-11-19 11:25:39.731189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.719 [2024-11-19 11:25:39.731199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.719 qpair failed and we were unable to recover it.
00:31:31.719 [2024-11-19 11:25:39.731494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.731503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.731686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.731696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.731763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.731774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.732093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.732105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.732417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.732428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 
00:31:31.719 [2024-11-19 11:25:39.732778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.732788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.733108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.733119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.733328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.733339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.733656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.733666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.733838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.733853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 
00:31:31.719 [2024-11-19 11:25:39.734187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.734198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.734517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.734528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.734844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.734855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.735161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.735172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.735501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.735512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 
00:31:31.719 [2024-11-19 11:25:39.735812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.735823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.736205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.736216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.736444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.736454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.736644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.736654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.736874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.736885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 
00:31:31.719 [2024-11-19 11:25:39.737125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.737135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.719 [2024-11-19 11:25:39.737477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.719 [2024-11-19 11:25:39.737487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.719 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.737820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.737830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.738152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.738162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.738446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.738456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 
00:31:31.720 [2024-11-19 11:25:39.738765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.738775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.738963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.738973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.739173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.739182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.739500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.739510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.739800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.739810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 
00:31:31.720 [2024-11-19 11:25:39.740129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.740140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.740305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.740314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.740682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.740692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.740762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.740771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.741058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.741068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 
00:31:31.720 [2024-11-19 11:25:39.741251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.741262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.741444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.741456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.741732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.741741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.742079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.742089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.742482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.742492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 
00:31:31.720 [2024-11-19 11:25:39.742676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.742686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.742877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.742887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.743075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.743084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.743393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.743403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 00:31:31.720 [2024-11-19 11:25:39.743725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.720 [2024-11-19 11:25:39.743735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.720 qpair failed and we were unable to recover it. 
00:31:31.720 [2024-11-19 11:25:39.743930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.743940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.744264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.744274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.744457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.744467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.744631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.744641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.744858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.744871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.745170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.745180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.745500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.745510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.745710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.745720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.745786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.745796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.746595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.746619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.746956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.746968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.747313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.747323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.747647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.747658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.747969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.747980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.720 [2024-11-19 11:25:39.748326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.720 [2024-11-19 11:25:39.748337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.720 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.748678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.748688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.748874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.748885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.749208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.749218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.749540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.749555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.749738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.749748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.750206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.750217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.750419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.750428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.750749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.750759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.751062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.751072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.751243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.751253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.751535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.751544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.751866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.751877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.752182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.752193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.752504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.752514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.752830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.752840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.753185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.753195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.753486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.753495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.753694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.753706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.753962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.753973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.754292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.754303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.754497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.754506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.754909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.754919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.755225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.755235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.755524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.755536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.755720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.755730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.755919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.755929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.756311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.756321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.756502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.756512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.756845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.756855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.757205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.757215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.757278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.757288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.757415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.757424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.757726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.757737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.758053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.758064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.758349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.758359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.758545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.758555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.758841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.721 [2024-11-19 11:25:39.758851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.721 qpair failed and we were unable to recover it.
00:31:31.721 [2024-11-19 11:25:39.759179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.759190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.759353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.759362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.759455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.759464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.759665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.759675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.760008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.760018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.760322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.760332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.760527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.760538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.760708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.760722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.761041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.761052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.761350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.761360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.761630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.761640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.761952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.761962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.762134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.762145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.762456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.762466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.762642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.762652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.763056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.763067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.763262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.763272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.763668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.763678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.763839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.763849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.764225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.764236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.764529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.764539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.764907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.764917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.765228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.765238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.765408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.765418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.765792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.765802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.765974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.765985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.766317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.766328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.766662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.766672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.766890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.766900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.767322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.767332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.767509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.767519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.767812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.767822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.768135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.768145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.768437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.768448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.768611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.768621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.768942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.768953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.769130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.769139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.769326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.722 [2024-11-19 11:25:39.769337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.722 qpair failed and we were unable to recover it.
00:31:31.722 [2024-11-19 11:25:39.769558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.769569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.769764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.769774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.770106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.770118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.770307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.770316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.770643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.770653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.770961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.770972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.771262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.771273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.771587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.771598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.771930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.771942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.772181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.772191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.772517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.772530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.772820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.772832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.773088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.773102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.773415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.773425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.773597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.773607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.773801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.773810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.774107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.774118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.774427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.774437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.774628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.774638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.774937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.774947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.775242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.775253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.775426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.775437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.775654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.775667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.775854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.775867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.776190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:31.723 [2024-11-19 11:25:39.776201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:31.723 qpair failed and we were unable to recover it.
00:31:31.723 [2024-11-19 11:25:39.776252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.776261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.776458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.776470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.776695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.776706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.777034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.777046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.777376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.777387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 
00:31:31.723 [2024-11-19 11:25:39.777724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.777736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.777944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.777955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.778171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.778181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.778331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.778341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.778626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.778637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 
00:31:31.723 [2024-11-19 11:25:39.779001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.779012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.779331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.779341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.723 [2024-11-19 11:25:39.779651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-11-19 11:25:39.779662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.723 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.779848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.779858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.780159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.780169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 
00:31:31.724 [2024-11-19 11:25:39.780412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.780421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.780749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.780758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.780916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.780926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.781252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.781262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.781559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.781569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 
00:31:31.724 [2024-11-19 11:25:39.781764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.781775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.781973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.781983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.782314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.782324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.782509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.782519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.782844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.782854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 
00:31:31.724 [2024-11-19 11:25:39.783201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.783211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.783518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.783528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.783715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.783725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.783902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.783912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.784119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.784129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 
00:31:31.724 [2024-11-19 11:25:39.784344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.784355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.784700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.784711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.784897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.784907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.785070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.785080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.785249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.785261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 
00:31:31.724 [2024-11-19 11:25:39.785570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.785580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.785897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.785908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.786209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.786220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.786300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.786310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 00:31:31.724 [2024-11-19 11:25:39.786623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.724 [2024-11-19 11:25:39.786633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.724 qpair failed and we were unable to recover it. 
00:31:31.724 [2024-11-19 11:25:39.787023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.787033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.787342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.787351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.787686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.787695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.787998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.788009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.788191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.788201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 
00:31:31.725 [2024-11-19 11:25:39.788490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.788500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.788835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.788845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.789238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.789248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.789570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.789580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.789753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.789763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 
00:31:31.725 [2024-11-19 11:25:39.789949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.789959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.790138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.790149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.790480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.790490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.790695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.790707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.790871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.790883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 
00:31:31.725 [2024-11-19 11:25:39.791209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.791220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.791538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.791549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.791712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.791722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.792047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.792057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.792203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.792213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 
00:31:31.725 [2024-11-19 11:25:39.792576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.792586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.792785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.792795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.792999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.793010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.793228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.793237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.793455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.793464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 
00:31:31.725 [2024-11-19 11:25:39.793770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.793780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.794089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.794100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.794432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.794442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.794758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.794768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.795057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.795068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 
00:31:31.725 [2024-11-19 11:25:39.795409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.795419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.795731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.795741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.796072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.796082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.796308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.796317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.796530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.796541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 
00:31:31.725 [2024-11-19 11:25:39.796974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.796985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.797165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.797175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.797542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.797553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.725 qpair failed and we were unable to recover it. 00:31:31.725 [2024-11-19 11:25:39.797715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.725 [2024-11-19 11:25:39.797725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.726 qpair failed and we were unable to recover it. 00:31:31.726 [2024-11-19 11:25:39.797919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.726 [2024-11-19 11:25:39.797929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.726 qpair failed and we were unable to recover it. 
00:31:31.729 [2024-11-19 11:25:39.827998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.828008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.828338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.828348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.828656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.828666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.828979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.828989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.829215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.829225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 
00:31:31.729 [2024-11-19 11:25:39.829532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.829542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.829850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.829860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.830072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.830082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.830492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.830502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.830666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.830675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 
00:31:31.729 [2024-11-19 11:25:39.830952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.830962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.831191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.831201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.831536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.831546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.831889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.831899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.832203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.832214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 
00:31:31.729 [2024-11-19 11:25:39.832415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.832425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.832766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.832776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.833082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.833092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.833416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.833426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.833776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.833787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 
00:31:31.729 [2024-11-19 11:25:39.834077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.834087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.834287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.834297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.834592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.834603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.834781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.834792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.835119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.835130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 
00:31:31.729 [2024-11-19 11:25:39.835332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.835342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.835657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.835667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.835860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.835874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.836176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.836186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 00:31:31.729 [2024-11-19 11:25:39.836323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.729 [2024-11-19 11:25:39.836333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.729 qpair failed and we were unable to recover it. 
00:31:31.730 [2024-11-19 11:25:39.836556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.836566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.836859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.836873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.837222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.837232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.837423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.837432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.837810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.837821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 
00:31:31.730 [2024-11-19 11:25:39.837905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.837915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.838144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.838153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.838325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.838334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.838623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.838633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.838941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.838952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 
00:31:31.730 [2024-11-19 11:25:39.839248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.839260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.839582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.839592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.839927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.839937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.840248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.840258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.840570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.840579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 
00:31:31.730 [2024-11-19 11:25:39.840889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.840900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.841233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.841242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.841437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.841447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.841659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.841669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.842007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.842018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 
00:31:31.730 [2024-11-19 11:25:39.842217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.842226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.842583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.842592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.842766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.842776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.842952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.842962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.843267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.843277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 
00:31:31.730 [2024-11-19 11:25:39.843606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.843616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.843793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.843804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.844010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.844019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.844322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.844332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.844658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.844668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 
00:31:31.730 [2024-11-19 11:25:39.844852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.844866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.845163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.845173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.845489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.845499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.845662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.845672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.845909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.845919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 
00:31:31.730 [2024-11-19 11:25:39.846116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.846127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.846438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.846448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.730 qpair failed and we were unable to recover it. 00:31:31.730 [2024-11-19 11:25:39.846769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.730 [2024-11-19 11:25:39.846781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.846949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.846960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.847307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.847318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 
00:31:31.731 [2024-11-19 11:25:39.847612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.847623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.847932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.847942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.848246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.848257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.848380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.848389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.848702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.848712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 
00:31:31.731 [2024-11-19 11:25:39.848998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.849009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.849298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.849308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.849630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.849640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.849959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.849970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.850275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.850286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 
00:31:31.731 [2024-11-19 11:25:39.850465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.850475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.850793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.850803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.851082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.851092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.851484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.851494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.851797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.851807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 
00:31:31.731 [2024-11-19 11:25:39.852154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.852164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.852563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.852573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.852756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.852766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.852917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.852927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.853149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.853159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 
00:31:31.731 [2024-11-19 11:25:39.853469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.853479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.853665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.853675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.853867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.853878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.854093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.854102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.854394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.854404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 
00:31:31.731 [2024-11-19 11:25:39.854587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.854597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.854885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.854895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.855296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.855306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.855617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.855627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.855805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.855815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 
00:31:31.731 [2024-11-19 11:25:39.856102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.731 [2024-11-19 11:25:39.856113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.731 qpair failed and we were unable to recover it. 00:31:31.731 [2024-11-19 11:25:39.856430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.856440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.856755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.856764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.857079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.857090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.857403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.857413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 
00:31:31.732 [2024-11-19 11:25:39.857726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.857736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.857778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.857787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.858002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.858012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.858365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.858380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.858782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.858792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 
00:31:31.732 [2024-11-19 11:25:39.859015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.859025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.859297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.859307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.859617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.859627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.859939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.859950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.860264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.860274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 
00:31:31.732 [2024-11-19 11:25:39.860625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.860636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.860948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.860958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.861272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.861283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.861473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.861484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.861649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.861660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 
00:31:31.732 [2024-11-19 11:25:39.861982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.861993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.862179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.862191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.862504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.862515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.862835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.862845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.863196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.863207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 
00:31:31.732 [2024-11-19 11:25:39.863517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.863528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.863836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.863846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.864027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.864038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.864346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.864356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.864668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.864678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 
00:31:31.732 [2024-11-19 11:25:39.865047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.865059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.865272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.865282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.865608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.865618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.865809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.865819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.865873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.865883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 
00:31:31.732 [2024-11-19 11:25:39.866209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.866219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.866424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.866436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.866612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.866622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.866933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.866944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.732 qpair failed and we were unable to recover it. 00:31:31.732 [2024-11-19 11:25:39.867148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.732 [2024-11-19 11:25:39.867158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 
00:31:31.733 [2024-11-19 11:25:39.867334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.867345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.867688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.867698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.868020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.868031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.868313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.868324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.868506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.868516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 
00:31:31.733 [2024-11-19 11:25:39.868725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.868735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.868901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.868912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.869232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.869241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.869561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.869570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.869865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.869876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 
00:31:31.733 [2024-11-19 11:25:39.870061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.870071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.870353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.870362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.870672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.870681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.870978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.870988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.871273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.871283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 
00:31:31.733 [2024-11-19 11:25:39.871560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.871570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.871882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.871892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.872070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.872079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.872119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.872128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.872393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.872402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 
00:31:31.733 [2024-11-19 11:25:39.872712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.872722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.872892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.872903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.873134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.873145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.873487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.873497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.873666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.873675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 
00:31:31.733 [2024-11-19 11:25:39.873714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.873723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.873968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.873978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.874259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.874269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.733 [2024-11-19 11:25:39.874601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.733 [2024-11-19 11:25:39.874610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.733 qpair failed and we were unable to recover it. 00:31:31.734 [2024-11-19 11:25:39.874796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.734 [2024-11-19 11:25:39.874808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.734 qpair failed and we were unable to recover it. 
00:31:31.734 [2024-11-19 11:25:39.875148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.734 [2024-11-19 11:25:39.875158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.734 qpair failed and we were unable to recover it. 00:31:31.734 [2024-11-19 11:25:39.875463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.734 [2024-11-19 11:25:39.875472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.734 qpair failed and we were unable to recover it. 00:31:31.734 [2024-11-19 11:25:39.875779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.734 [2024-11-19 11:25:39.875789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.734 qpair failed and we were unable to recover it. 00:31:31.734 [2024-11-19 11:25:39.876025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.734 [2024-11-19 11:25:39.876035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.734 qpair failed and we were unable to recover it. 00:31:31.734 [2024-11-19 11:25:39.876084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.734 [2024-11-19 11:25:39.876093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.734 qpair failed and we were unable to recover it. 
00:31:31.737 [2024-11-19 11:25:39.907563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.907574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.907854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.907871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.908188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.908198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.908514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.908523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.908833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.908843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 
00:31:31.737 [2024-11-19 11:25:39.909149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.909159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.909453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.909463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.909771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.909781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.910097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.910108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.910429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.910440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 
00:31:31.737 [2024-11-19 11:25:39.910571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.910580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.910905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.910915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.911244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.911254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.911572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.911581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.911744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.911754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 
00:31:31.737 [2024-11-19 11:25:39.912128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.912139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.912453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.912463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.912669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.912679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.912828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.912838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.913032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.913042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 
00:31:31.737 [2024-11-19 11:25:39.913380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.913390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.913714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.913723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.913926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.913936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.914251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.914261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.914613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.914623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 
00:31:31.737 [2024-11-19 11:25:39.914928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.914939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.915106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.915115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.915331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.915345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.915664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.915674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.915870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.915881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 
00:31:31.737 [2024-11-19 11:25:39.916084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.916093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.737 qpair failed and we were unable to recover it. 00:31:31.737 [2024-11-19 11:25:39.916377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.737 [2024-11-19 11:25:39.916387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.916696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.916707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.916978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.916988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.917277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.917287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 
00:31:31.738 [2024-11-19 11:25:39.917500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.917510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.917922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.917932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.918093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.918102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.918401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.918411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.918681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.918691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 
00:31:31.738 [2024-11-19 11:25:39.919005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.919016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.919211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.919221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.919563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.919572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.919884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.919894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.920210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.920221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 
00:31:31.738 [2024-11-19 11:25:39.920536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.920546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.920711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.920720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.921056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.921066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.921291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.921301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.921473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.921483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 
00:31:31.738 [2024-11-19 11:25:39.921850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.921860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.922047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.922057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.922389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.922399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.922698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.922708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.922923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.922933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 
00:31:31.738 [2024-11-19 11:25:39.923282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.923292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.923381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.923392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.923674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.923683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.923846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.923856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.924177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.924187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 
00:31:31.738 [2024-11-19 11:25:39.924522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.924532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.924859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.924871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.925163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.925173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.925467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.925477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 00:31:31.738 [2024-11-19 11:25:39.925750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.738 [2024-11-19 11:25:39.925760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.738 qpair failed and we were unable to recover it. 
00:31:31.739 [2024-11-19 11:25:39.926076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.926086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 00:31:31.739 [2024-11-19 11:25:39.926397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.926408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 00:31:31.739 [2024-11-19 11:25:39.926750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.926760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 00:31:31.739 [2024-11-19 11:25:39.927055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.927068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 00:31:31.739 [2024-11-19 11:25:39.927395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.927405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 
00:31:31.739 [2024-11-19 11:25:39.927717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.927727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 00:31:31.739 [2024-11-19 11:25:39.928035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.928046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 00:31:31.739 [2024-11-19 11:25:39.928356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.928366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 00:31:31.739 [2024-11-19 11:25:39.928688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.928698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 00:31:31.739 [2024-11-19 11:25:39.929012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.929022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 
00:31:31.739 [2024-11-19 11:25:39.929316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.739 [2024-11-19 11:25:39.929326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.739 qpair failed and we were unable to recover it. 
00:31:31.742 [... the same connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." message triplet repeats for every reconnect attempt against tqpair=0xf1e490 (10.0.0.2:4420), from 11:25:39.929 through 11:25:39.959 on 2024-11-19 ...]
00:31:31.742 [2024-11-19 11:25:39.959772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.959782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.960094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.960105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.960291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.960300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.960632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.960642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.960956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.960967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 
00:31:31.742 [2024-11-19 11:25:39.961312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.961322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.961630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.961640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.961821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.961830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.962237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.962247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.962596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.962605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 
00:31:31.742 [2024-11-19 11:25:39.962921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.962930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.963093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.963103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.963425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.963435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.963608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.963617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.963899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.963911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 
00:31:31.742 [2024-11-19 11:25:39.964242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.964252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.964552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.742 [2024-11-19 11:25:39.964561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.742 qpair failed and we were unable to recover it. 00:31:31.742 [2024-11-19 11:25:39.964879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.964889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.965093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.965103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.965271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.965282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 
00:31:31.743 [2024-11-19 11:25:39.965433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.965443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.965712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.965722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.965903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.965913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.966235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.966244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.966576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.966586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 
00:31:31.743 [2024-11-19 11:25:39.966773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.966783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.967068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.967077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.967370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.967380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.967574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.967585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.967748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.967758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 
00:31:31.743 [2024-11-19 11:25:39.968085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.968097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.968430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.968441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.968755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.968765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.969053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.969064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.969381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.969391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 
00:31:31.743 [2024-11-19 11:25:39.969592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.969602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.969942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.969952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.970178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.970188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.970562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.970572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.970872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.970890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 
00:31:31.743 [2024-11-19 11:25:39.971200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.971210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.971498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.743 [2024-11-19 11:25:39.971508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.743 qpair failed and we were unable to recover it. 00:31:31.743 [2024-11-19 11:25:39.971828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.971838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.972151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.972162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.972364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.972375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 
00:31:31.744 [2024-11-19 11:25:39.972569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.972579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.972970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.972980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.973281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.973291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.973452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.973462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.973644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.973653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 
00:31:31.744 [2024-11-19 11:25:39.973854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.973867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.974040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.974049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.974250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.974259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.974569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.974578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.974887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.974897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 
00:31:31.744 [2024-11-19 11:25:39.975225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.975235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.975508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.975517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.975693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.975703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.975904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.975914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.976177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.976187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 
00:31:31.744 [2024-11-19 11:25:39.976370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.976380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.976593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.976602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.976821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.976831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.977023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.977033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.977464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.977473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 
00:31:31.744 [2024-11-19 11:25:39.977663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.977672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.977950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.977961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.978250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.978260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.978578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.978589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.978661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.978670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 
00:31:31.744 [2024-11-19 11:25:39.978970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.978981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.979164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.979174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.979577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.979588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.979879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.979889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.979931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.979940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 
00:31:31.744 [2024-11-19 11:25:39.980237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.980246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.980641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.980650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.980836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.980845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.744 [2024-11-19 11:25:39.981136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.744 [2024-11-19 11:25:39.981146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.744 qpair failed and we were unable to recover it. 00:31:31.745 [2024-11-19 11:25:39.981338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.745 [2024-11-19 11:25:39.981348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.745 qpair failed and we were unable to recover it. 
00:31:31.748 [2024-11-19 11:25:40.013017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.013028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.013246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.013258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.013472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.013482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.013844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.013871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.013983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.014000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 
00:31:31.748 [2024-11-19 11:25:40.014054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.014068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.014258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.014272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.014480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.014495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.014614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.014629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.014775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.014786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 
00:31:31.748 [2024-11-19 11:25:40.015036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.015047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.015237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.015246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.015490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.015501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.015781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.015791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.016164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.016175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 
00:31:31.748 [2024-11-19 11:25:40.016401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.016411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.016734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.016745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.748 [2024-11-19 11:25:40.017072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.748 [2024-11-19 11:25:40.017083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.748 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.017265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.017274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.017652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.017662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 
00:31:31.749 [2024-11-19 11:25:40.018008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.018019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.018208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.018218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.018440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.018449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.018845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.018855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.019074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.019085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 
00:31:31.749 [2024-11-19 11:25:40.019414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.019423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.019751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.019762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.020059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.020069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.020389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.020401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.020596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.020606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 
00:31:31.749 [2024-11-19 11:25:40.020932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.020942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.021120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.021129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.021305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.021314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.021527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.021538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.021718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.021727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 
00:31:31.749 [2024-11-19 11:25:40.022048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.022058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.022384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.022394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.022567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.022577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.022892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.022903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.023232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.023242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 
00:31:31.749 [2024-11-19 11:25:40.023288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.023298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.023525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.023535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.023871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.023881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.024163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.024173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.024487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.024497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 
00:31:31.749 [2024-11-19 11:25:40.024826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.024836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.025143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.025153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.025344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.025354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.025682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.025692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.025882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.025892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 
00:31:31.749 [2024-11-19 11:25:40.026265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.026275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.026598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.026608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.026788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.026798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.749 qpair failed and we were unable to recover it. 00:31:31.749 [2024-11-19 11:25:40.027122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.749 [2024-11-19 11:25:40.027132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.027473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.027483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 
00:31:31.750 [2024-11-19 11:25:40.027781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.027791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.028019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.028029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.028360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.028370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.028552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.028563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.028720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.028732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 
00:31:31.750 [2024-11-19 11:25:40.028779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.028789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.029095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.029106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.029298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.029308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.029473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.029483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.029525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.029534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 
00:31:31.750 [2024-11-19 11:25:40.029855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.029876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.030172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.030182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.030467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.030477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.030813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.030823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.031160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.031172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 
00:31:31.750 [2024-11-19 11:25:40.031482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.031492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.031688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.031699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.032032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.032042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:31.750 [2024-11-19 11:25:40.032100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.750 [2024-11-19 11:25:40.032110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:31.750 qpair failed and we were unable to recover it. 00:31:32.030 [2024-11-19 11:25:40.032505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.030 [2024-11-19 11:25:40.032515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.030 qpair failed and we were unable to recover it. 
00:31:32.030 [2024-11-19 11:25:40.032698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.030 [2024-11-19 11:25:40.032709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.030 qpair failed and we were unable to recover it.
00:31:32.033 [2024-11-19 11:25:40.061983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.033 [2024-11-19 11:25:40.061993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.033 qpair failed and we were unable to recover it. 00:31:32.033 [2024-11-19 11:25:40.062283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.033 [2024-11-19 11:25:40.062293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.033 qpair failed and we were unable to recover it. 00:31:32.033 [2024-11-19 11:25:40.062617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.033 [2024-11-19 11:25:40.062627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.033 qpair failed and we were unable to recover it. 00:31:32.033 [2024-11-19 11:25:40.062700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.033 [2024-11-19 11:25:40.062710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.033 qpair failed and we were unable to recover it. 00:31:32.033 [2024-11-19 11:25:40.062898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.033 [2024-11-19 11:25:40.062908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.033 qpair failed and we were unable to recover it. 
00:31:32.033 [2024-11-19 11:25:40.063103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.033 [2024-11-19 11:25:40.063113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.033 qpair failed and we were unable to recover it. 00:31:32.033 [2024-11-19 11:25:40.063311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.033 [2024-11-19 11:25:40.063328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.033 qpair failed and we were unable to recover it. 00:31:32.033 [2024-11-19 11:25:40.063643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.033 [2024-11-19 11:25:40.063653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.033 qpair failed and we were unable to recover it. 00:31:32.033 [2024-11-19 11:25:40.063942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.063953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.064265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.064275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 
00:31:32.034 [2024-11-19 11:25:40.064554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.064565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.064872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.064882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.065214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.065223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.065435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.065444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.065617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.065627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 
00:31:32.034 [2024-11-19 11:25:40.065992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.066004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.066327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.066337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.066534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.066544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.066857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.066871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.067196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.067207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 
00:31:32.034 [2024-11-19 11:25:40.067385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.067395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.067587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.067596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.067937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.067948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.068243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.068253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.068522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.068531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 
00:31:32.034 [2024-11-19 11:25:40.068718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.068729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.068806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.068816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.069137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.069148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.069490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.069500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.069702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.069714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 
00:31:32.034 [2024-11-19 11:25:40.070002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.070013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.070298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.070310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.070612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.070623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.070936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.070946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.071002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.071012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 
00:31:32.034 [2024-11-19 11:25:40.071281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.071290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.071622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.071632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.071853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.071866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.072067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.072077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.072244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.072254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 
00:31:32.034 [2024-11-19 11:25:40.072588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.072599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.072757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.072767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.073083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.073093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.073263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.073274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.034 [2024-11-19 11:25:40.073357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.073367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 
00:31:32.034 [2024-11-19 11:25:40.073565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.034 [2024-11-19 11:25:40.073575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.034 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.073858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.073872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.074044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.074053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.074359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.074369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.074531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.074541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 
00:31:32.035 [2024-11-19 11:25:40.074739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.074749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.075049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.075060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.075463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.075473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.075757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.075767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.075964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.075975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 
00:31:32.035 [2024-11-19 11:25:40.076154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.076164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.076347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.076357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.076650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.076661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.076792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.076801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.076995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.077006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 
00:31:32.035 [2024-11-19 11:25:40.077282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.077293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.077487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.077498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.077673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.077683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.077860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.077875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.078118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.078128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 
00:31:32.035 [2024-11-19 11:25:40.078315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.078325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.078523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.078534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.078874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.078885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.079197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.079208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.079400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.079411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 
00:31:32.035 [2024-11-19 11:25:40.079608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.079618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.079994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.080005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.080353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.080363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.080653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.080663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 00:31:32.035 [2024-11-19 11:25:40.080977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.035 [2024-11-19 11:25:40.080988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.035 qpair failed and we were unable to recover it. 
00:31:32.035 [2024-11-19 11:25:40.081294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.035 [2024-11-19 11:25:40.081304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.035 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() errno = 111, sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats for every reconnect attempt from 11:25:40.081569 through 11:25:40.111284 (console timestamps 00:31:32.035-00:31:32.038); only the attempt timestamps differ ...]
00:31:32.038 [2024-11-19 11:25:40.111481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.038 [2024-11-19 11:25:40.111490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.038 qpair failed and we were unable to recover it.
00:31:32.038 [2024-11-19 11:25:40.111668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.038 [2024-11-19 11:25:40.111677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.038 qpair failed and we were unable to recover it. 00:31:32.038 [2024-11-19 11:25:40.112008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.038 [2024-11-19 11:25:40.112018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.038 qpair failed and we were unable to recover it. 00:31:32.038 [2024-11-19 11:25:40.112247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.112259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.112583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.112594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.112910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.112922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 
00:31:32.039 [2024-11-19 11:25:40.113115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.113125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.113176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.113186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.113302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.113313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.113582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.113593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.113786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.113804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 
00:31:32.039 [2024-11-19 11:25:40.114138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.114148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.114332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.114342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.114396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.114406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.114707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.114716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.114996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.115005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 
00:31:32.039 [2024-11-19 11:25:40.115293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.115302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.115466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.115476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.115667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.115677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.115842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.115852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.116204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.116214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 
00:31:32.039 [2024-11-19 11:25:40.116496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.116505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.116805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.116814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.117125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.117135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.117185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.117195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.117514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.117525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 
00:31:32.039 [2024-11-19 11:25:40.117725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.117736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.118034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.118044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.118240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.118259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.118584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.118593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.118782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.118799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 
00:31:32.039 [2024-11-19 11:25:40.119090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.119101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.119396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.119406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.119728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.119738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.120043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.120054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.120245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.120255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 
00:31:32.039 [2024-11-19 11:25:40.120555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.120565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.039 [2024-11-19 11:25:40.120757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.039 [2024-11-19 11:25:40.120768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.039 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.121139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.121149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.121193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.121202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.121513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.121523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 
00:31:32.040 [2024-11-19 11:25:40.121833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.121843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.122016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.122026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.122201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.122212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.122526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.122536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.122715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.122726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 
00:31:32.040 [2024-11-19 11:25:40.122912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.122923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.123108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.123118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.123422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.123431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.123782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.123792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.124179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.124189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 
00:31:32.040 [2024-11-19 11:25:40.124474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.124484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.124544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.124554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.124855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.124870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.125213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.125222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.125492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.125503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 
00:31:32.040 [2024-11-19 11:25:40.125828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.125838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.126255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.126265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.126579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.126589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.126639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.126649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.126931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.126941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 
00:31:32.040 [2024-11-19 11:25:40.127281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.127291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.127565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.127575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.127899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.127909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.128086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.128097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.128394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.128403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 
00:31:32.040 [2024-11-19 11:25:40.128728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.128738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.128788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.128797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.128986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.128997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.129284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.129294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.129586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.129595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 
00:31:32.040 [2024-11-19 11:25:40.129868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.129880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.130198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.130209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.040 qpair failed and we were unable to recover it. 00:31:32.040 [2024-11-19 11:25:40.130369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.040 [2024-11-19 11:25:40.130379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.130627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.130636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.130923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.130933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 
00:31:32.041 [2024-11-19 11:25:40.131281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.131291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.131601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.131611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.131783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.131793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.132080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.132089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.132302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.132312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 
00:31:32.041 [2024-11-19 11:25:40.132389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.132398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.132773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.132782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.132903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.132913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.132994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.133014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 00:31:32.041 [2024-11-19 11:25:40.133384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.041 [2024-11-19 11:25:40.133395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.041 qpair failed and we were unable to recover it. 
00:31:32.041 [2024-11-19 11:25:40.133468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.133478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.133813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.133823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.134137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.134147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.134364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.134373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.134551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.134560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.134679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.134689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.134760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.134771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.134889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.134899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.135182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.135192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.135665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.135675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.136018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.136028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.136361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.136370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.136670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.136682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.136980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.136990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.137249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.137259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.137573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.137583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.137903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.137912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.138219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.138229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.138521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.138532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.138833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.138843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.139052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.139063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.139390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.139400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.139604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.139613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.139921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.139931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.140219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.140229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.041 [2024-11-19 11:25:40.140535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.041 [2024-11-19 11:25:40.140545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.041 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.140850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.140860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.141154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.141164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.141550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.141560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.141871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.141882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.142212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.142223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.142527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.142538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.142923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.142934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.143250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.143259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.143531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.143541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.143748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.143758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.144108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.144118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.144435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.144445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.144744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.144754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.145035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.145045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.145340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.145350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.145527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.145536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.145763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.145773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.146083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.146094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.146288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.146297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.146482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.146498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.146664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.146674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.146941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.146952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.147241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.147251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.147548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.147558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.147859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.147873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.148176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.148187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.148505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.148514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.148826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.148838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.149152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.149163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.149444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.149454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.149761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.149770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.149946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.149956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.150251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.150260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.150579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.150588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.150639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.150649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.042 [2024-11-19 11:25:40.150923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.042 [2024-11-19 11:25:40.150933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.042 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.151146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.151157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.151452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.151462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.151736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.151745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.152036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.152046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.152371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.152381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.152699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.152709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.153024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.153034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.153198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.153208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.153429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.153438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.153803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.153813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.154009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.154020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.154292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.154301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.154605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.154614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.154923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.154933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.155143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.155152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.155486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.155496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.155798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.155809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.156126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.156136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.156321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.156332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.156500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.156510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.156730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.156741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.156927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.156937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.157254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.157264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.157596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.157605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.157979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.157990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.158286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.158296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.158593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.158603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.158899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.158910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.158981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.158991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.159145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.159154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.159460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.159469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.159629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.159639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.159987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.043 [2024-11-19 11:25:40.159998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.043 qpair failed and we were unable to recover it.
00:31:32.043 [2024-11-19 11:25:40.160294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.043 [2024-11-19 11:25:40.160304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.043 qpair failed and we were unable to recover it. 00:31:32.043 [2024-11-19 11:25:40.160615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.043 [2024-11-19 11:25:40.160625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.043 qpair failed and we were unable to recover it. 00:31:32.043 [2024-11-19 11:25:40.160757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.043 [2024-11-19 11:25:40.160766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.043 qpair failed and we were unable to recover it. 00:31:32.043 [2024-11-19 11:25:40.160947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.043 [2024-11-19 11:25:40.160957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.043 qpair failed and we were unable to recover it. 00:31:32.043 [2024-11-19 11:25:40.161242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.043 [2024-11-19 11:25:40.161252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.043 qpair failed and we were unable to recover it. 
00:31:32.043 [2024-11-19 11:25:40.161576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.161585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.161796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.161806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.162115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.162126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.162459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.162469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.162664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.162674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 
00:31:32.044 [2024-11-19 11:25:40.163003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.163014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.163333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.163343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.163650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.163661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.164000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.164010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.164343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.164354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 
00:31:32.044 [2024-11-19 11:25:40.164641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.164651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.164974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.164985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.165146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.165155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.165505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.165514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.165749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.165758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 
00:31:32.044 [2024-11-19 11:25:40.166071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.166081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.166390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.166401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.166692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.166702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.167002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.167012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.167312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.167321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 
00:31:32.044 [2024-11-19 11:25:40.167611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.167620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.167923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.167936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.168156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.168166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.168481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.168491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.168664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.168674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 
00:31:32.044 [2024-11-19 11:25:40.169011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.169022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.169169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.169178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.169363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.169372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.169676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.169685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.170013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.170023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 
00:31:32.044 [2024-11-19 11:25:40.170317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.170327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.170537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.170547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.170893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.170903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.044 [2024-11-19 11:25:40.171080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.044 [2024-11-19 11:25:40.171089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.044 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.171295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.171305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 
00:31:32.045 [2024-11-19 11:25:40.171598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.171607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.171934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.171944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.172236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.172246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.172411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.172420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.172609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.172618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 
00:31:32.045 [2024-11-19 11:25:40.172927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.172938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.173255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.173265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.173584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.173594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.173910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.173920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.174210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.174219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 
00:31:32.045 [2024-11-19 11:25:40.174491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.174500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.174675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.174684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.174975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.174985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.175367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.175377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.175561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.175571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 
00:31:32.045 [2024-11-19 11:25:40.175777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.175787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.175970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.175980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.176127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.176136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.176452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.176461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.176586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.176596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 
00:31:32.045 [2024-11-19 11:25:40.176884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.176894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.177072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.177082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.177436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.177446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.177607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.177617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.177899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.177910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 
00:31:32.045 [2024-11-19 11:25:40.178093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.178103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.178459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.178469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.178773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.178784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.178975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.178986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.179348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.179358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 
00:31:32.045 [2024-11-19 11:25:40.179647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.179657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.179945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.179956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.180135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.180145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.180516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.180525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.180789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.180799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 
00:31:32.045 [2024-11-19 11:25:40.181122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.181132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.181427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.045 [2024-11-19 11:25:40.181436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.045 qpair failed and we were unable to recover it. 00:31:32.045 [2024-11-19 11:25:40.181769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.181780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.181979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.181989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.182180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.182191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 
00:31:32.046 [2024-11-19 11:25:40.182512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.182522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.182770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.182781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.183041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.183051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.183234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.183244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.183470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.183481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 
00:31:32.046 [2024-11-19 11:25:40.183796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.183806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.184102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.184113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.184401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.184411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.184729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.184739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 00:31:32.046 [2024-11-19 11:25:40.184929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.046 [2024-11-19 11:25:40.184939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.046 qpair failed and we were unable to recover it. 
00:31:32.049 [2024-11-19 11:25:40.215975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.215985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.216312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.216322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.216648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.216658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.216991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.217003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.217298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.217308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 
00:31:32.049 [2024-11-19 11:25:40.217511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.217523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.217832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.217842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.218186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.218197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.218386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.218404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.218728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.218739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 
00:31:32.049 [2024-11-19 11:25:40.218973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.218983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.219180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.219191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.219529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.219538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.219916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.219926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.220261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.220271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 
00:31:32.049 [2024-11-19 11:25:40.220587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.220596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.220758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.220769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.221130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.221140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.221441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.221451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.221738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.221747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 
00:31:32.049 [2024-11-19 11:25:40.222040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.222051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.222422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.222432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.222720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.222731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.223057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.223067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 00:31:32.049 [2024-11-19 11:25:40.223388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.223398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.049 qpair failed and we were unable to recover it. 
00:31:32.049 [2024-11-19 11:25:40.223571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.049 [2024-11-19 11:25:40.223581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.223945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.223955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.224254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.224264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.224335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.224344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.224682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.224692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 
00:31:32.050 [2024-11-19 11:25:40.224972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.224985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.225198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.225208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.225503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.225514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.225837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.225847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.226217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.226228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 
00:31:32.050 [2024-11-19 11:25:40.226386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.226396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.226443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.226454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.226499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.226509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.226808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.226818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.227223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.227234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 
00:31:32.050 [2024-11-19 11:25:40.227506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.227515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.227808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.227818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.227990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.228000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.228228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.228238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.228583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.228594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 
00:31:32.050 [2024-11-19 11:25:40.228882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.228893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.229195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.229204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.229493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.229503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.229811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.229821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.230123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.230134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 
00:31:32.050 [2024-11-19 11:25:40.230426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.230437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.230716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.230726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.231049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.231059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.231369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.231378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.231702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.231713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 
00:31:32.050 [2024-11-19 11:25:40.231936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.231947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.232284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.232295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.232629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.232640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.232790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.232800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.233079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.233089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 
00:31:32.050 [2024-11-19 11:25:40.233135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.233145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.233302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.233312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.233516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.233526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.050 [2024-11-19 11:25:40.233723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.050 [2024-11-19 11:25:40.233735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.050 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.233917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.233927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 
00:31:32.051 [2024-11-19 11:25:40.234221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.234230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.234518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.234528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.234866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.234878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.234966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.234977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.235297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.235307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 
00:31:32.051 [2024-11-19 11:25:40.235589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.235599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.235882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.235895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.236267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.236277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.236569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.236579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 00:31:32.051 [2024-11-19 11:25:40.236889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.051 [2024-11-19 11:25:40.236899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.051 qpair failed and we were unable to recover it. 
00:31:32.051 [2024-11-19 11:25:40.237286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.237296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.237603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.237613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.237901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.237912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.237962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.237971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.238162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.238171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.238578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.238587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.238904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.238914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.239098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.239108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.239498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.239508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.239699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.239708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.239923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.239934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.240325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.240335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.240561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.240571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.240890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.240900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.240950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.240959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.241270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.241280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.241609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.241619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.241802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.241812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.241979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.241991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.242327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.242337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.242661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.242672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.242838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.242849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.243051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.243061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.243339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.243350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.243522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.243532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.243692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.243704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.243903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.051 [2024-11-19 11:25:40.243913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.051 qpair failed and we were unable to recover it.
00:31:32.051 [2024-11-19 11:25:40.244224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.244234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.244542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.244552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.244869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.244880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.245078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.245087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.245431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.245441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.245749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.245759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.246078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.246088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.246258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.246268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.246313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.246322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.246643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.246655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.246974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.246984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.247285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.247294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.247466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.247475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.247831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.247840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.248016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.248026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.248337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.248347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.248646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.248656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.248960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.248971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.249199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.249209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.249519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.249529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.249704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.249714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.249886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.249897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.250194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.250204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.250521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.250530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.250954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.250964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.251312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.251322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.251613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.251623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.252012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.252022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.252320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.252330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.252659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.252669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.253011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.253021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.253414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.253424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.253739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.253748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.254033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.254043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.254454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.254464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.254647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.254656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.254866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.254876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.255042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.052 [2024-11-19 11:25:40.255053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.052 qpair failed and we were unable to recover it.
00:31:32.052 [2024-11-19 11:25:40.255376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.255386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.255706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.255717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.256031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.256041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.256341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.256351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.256710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.256720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.256765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.256774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.257066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.257077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.257398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.257408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.257690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.257699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.257910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.257920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.258244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.258254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.258400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.258410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.258651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.258661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.258708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.258716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.259041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.259051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.259226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.259237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.259513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.259523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.259833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.259842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.260067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.260077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.260264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.260274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.260670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.260680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.260969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.260980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.261279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.261289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.261587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.261598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.261766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.261777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.262019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.262030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.262335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.262345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.262520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.262531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.262751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.262762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.263066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.263076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.263299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.263309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.263505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.263514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.263724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.263734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.264038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.264049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.053 qpair failed and we were unable to recover it.
00:31:32.053 [2024-11-19 11:25:40.264387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.053 [2024-11-19 11:25:40.264397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.264738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.264748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.265081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.265093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.265444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.265453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.265743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.265753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.265806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.265815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.266100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.266112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.266462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.266472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.266746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.266755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.267004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.267015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.267313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.267322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.267367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.267377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.267665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.267675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.267988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.267998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.268168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.268178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.268579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.054 [2024-11-19 11:25:40.268589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.054 qpair failed and we were unable to recover it.
00:31:32.054 [2024-11-19 11:25:40.268904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.268914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.269250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.269259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.269576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.269586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.269874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.269885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.270173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.270183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 
00:31:32.054 [2024-11-19 11:25:40.270503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.270514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.270689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.270699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.270917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.270927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.271102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.271112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.271496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.271505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 
00:31:32.054 [2024-11-19 11:25:40.271801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.271810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.271983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.271993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.272240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.272249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.272446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.272456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.272811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.272821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 
00:31:32.054 [2024-11-19 11:25:40.272984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.272994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.273352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.273361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.273647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.273658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.274014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.274024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.274344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.274354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 
00:31:32.054 [2024-11-19 11:25:40.274529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.274539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.274836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.274847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.054 qpair failed and we were unable to recover it. 00:31:32.054 [2024-11-19 11:25:40.275156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.054 [2024-11-19 11:25:40.275166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.275469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.275478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.275782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.275792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 
00:31:32.055 [2024-11-19 11:25:40.276150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.276161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.276484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.276494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.276687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.276703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.276936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.276946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.277296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.277306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 
00:31:32.055 [2024-11-19 11:25:40.277635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.277645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.277958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.277968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.278145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.278155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.278425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.278436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.278487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.278498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 
00:31:32.055 [2024-11-19 11:25:40.278782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.278791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.278978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.278988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.279277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.279286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.279580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.279590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.279887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.279896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 
00:31:32.055 [2024-11-19 11:25:40.280216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.280226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.280400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.280410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.280636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.280645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.280853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.280869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.281181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.281190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 
00:31:32.055 [2024-11-19 11:25:40.281525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.281536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.281859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.281874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.282214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.282224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.282413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.282423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.282709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.282718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 
00:31:32.055 [2024-11-19 11:25:40.283021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.283031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.283413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.283422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.283586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.283597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.283642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.283653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.283834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.283844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 
00:31:32.055 [2024-11-19 11:25:40.284031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.284042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.284361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.284371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.284690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.284700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.284917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.284929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.055 [2024-11-19 11:25:40.285264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.285274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 
00:31:32.055 [2024-11-19 11:25:40.285586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.055 [2024-11-19 11:25:40.285596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.055 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.285930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.285940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.286159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.286169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.286537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.286548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.286866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.286876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 
00:31:32.056 [2024-11-19 11:25:40.287185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.287195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.287362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.287372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.287584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.287593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.287798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.287808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.288128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.288138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 
00:31:32.056 [2024-11-19 11:25:40.288495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.288506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.288547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.288556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.288837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.288847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.288986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.288997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 00:31:32.056 [2024-11-19 11:25:40.289170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.056 [2024-11-19 11:25:40.289180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.056 qpair failed and we were unable to recover it. 
00:31:32.056 [2024-11-19 11:25:40.289458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.056 [2024-11-19 11:25:40.289468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.056 qpair failed and we were unable to recover it.
[... the same three-line error record (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats 114 more times between 11:25:40.289 and 11:25:40.322 ...]
00:31:32.059 [2024-11-19 11:25:40.322287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.322297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.322659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.322669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.322959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.322970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.323182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.323193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.323478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.323491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 
00:31:32.059 [2024-11-19 11:25:40.323828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.323838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.324177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.324188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.324408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.324419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.324607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.324617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.324843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.324853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 
00:31:32.059 [2024-11-19 11:25:40.325158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.325169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.325499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.325509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.325814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.325824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.326032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.326050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.326386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.326396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 
00:31:32.059 [2024-11-19 11:25:40.326682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.326692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.327012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.327022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.327327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.059 [2024-11-19 11:25:40.327337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.059 qpair failed and we were unable to recover it. 00:31:32.059 [2024-11-19 11:25:40.327660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.327671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.327994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.328005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 
00:31:32.060 [2024-11-19 11:25:40.328360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.328370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.328633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.328643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.328943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.328954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.329265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.329276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.329582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.329592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 
00:31:32.060 [2024-11-19 11:25:40.329913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.329923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.330252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.330262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.330579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.330589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.330893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.330903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.331211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.331221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 
00:31:32.060 [2024-11-19 11:25:40.331535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.331546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.331703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.331721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.332022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.332033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.332107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.332115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.332412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.332422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 
00:31:32.060 [2024-11-19 11:25:40.332710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.332721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.333029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.333039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.333206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.333216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.333621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.333631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.333926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.333937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 
00:31:32.060 [2024-11-19 11:25:40.334291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.334301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.334491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.334508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.334701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.334711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.334887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.334898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.335100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.335110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 
00:31:32.060 [2024-11-19 11:25:40.335474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.335484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.335802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.335813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.335978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.335989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.336322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.336333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.336677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.336687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 
00:31:32.060 [2024-11-19 11:25:40.336911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.336921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.336967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.336976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.337329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.337339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.337646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.337657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.337837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.337847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 
00:31:32.060 [2024-11-19 11:25:40.338052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.060 [2024-11-19 11:25:40.338063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.060 qpair failed and we were unable to recover it. 00:31:32.060 [2024-11-19 11:25:40.338349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.338359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.338683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.338693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.338859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.338875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.339194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.339204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 
00:31:32.061 [2024-11-19 11:25:40.339433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.339444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.339736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.339746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.339935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.339946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.340275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.340285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.340572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.340583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 
00:31:32.061 [2024-11-19 11:25:40.340892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.340902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.341298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.341308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.341542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.341551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.341896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.341907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.342088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.342098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 
00:31:32.061 [2024-11-19 11:25:40.342327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.342338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.342599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.342610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.342929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.342942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.343271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.343281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 00:31:32.061 [2024-11-19 11:25:40.343442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.061 [2024-11-19 11:25:40.343452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.061 qpair failed and we were unable to recover it. 
00:31:32.061 [2024-11-19 11:25:40.343761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.061 [2024-11-19 11:25:40.343771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.061 qpair failed and we were unable to recover it.
00:31:32.061 [... the same connect()/connect_sock/qpair-failed error triplet for tqpair=0xf1e490 (addr=10.0.0.2, port=4420, errno = 111) repeats roughly 70 more times between 11:25:40.344053 and 11:25:40.363303 ...]
00:31:32.333 [2024-11-19 11:25:40.363341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1b020 (9): Bad file descriptor
00:31:32.333 [... the error triplet then occurs 3 times for tqpair=0x7fe3e0000b90 (addr=10.0.0.2, port=4420, errno = 111) between 11:25:40.363718 and 11:25:40.364800 ...]
00:31:32.333 [... the error triplet for tqpair=0xf1e490 (addr=10.0.0.2, port=4420, errno = 111) then repeats roughly 40 more times between 11:25:40.365164 and 11:25:40.376475 ...]
00:31:32.334 [2024-11-19 11:25:40.376764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.334 [2024-11-19 11:25:40.376773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.334 qpair failed and we were unable to recover it. 00:31:32.334 [2024-11-19 11:25:40.377193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.334 [2024-11-19 11:25:40.377203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.334 qpair failed and we were unable to recover it. 00:31:32.334 [2024-11-19 11:25:40.377538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.334 [2024-11-19 11:25:40.377548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.334 qpair failed and we were unable to recover it. 00:31:32.334 [2024-11-19 11:25:40.377901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.377911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.378132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.378144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 
00:31:32.335 [2024-11-19 11:25:40.378507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.378516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.378807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.378816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.379206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.379216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.379373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.379383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.379666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.379675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 
00:31:32.335 [2024-11-19 11:25:40.379988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.379998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.380165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.380175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.380428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.380437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.380751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.380760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.380937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.380946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 
00:31:32.335 [2024-11-19 11:25:40.381001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.381011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.381318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.381328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.381626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.381636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.381981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.381991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.382368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.382377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 
00:31:32.335 [2024-11-19 11:25:40.382691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.382700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.382846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.382856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.383253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.383262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.383605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.383614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.383927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.383937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 
00:31:32.335 [2024-11-19 11:25:40.384104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.384114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.384443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.384452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.384793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.384802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.385148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.385158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.385477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.385487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 
00:31:32.335 [2024-11-19 11:25:40.385797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.385807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 [2024-11-19 11:25:40.386237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.386249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.335 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.335 [2024-11-19 11:25:40.386408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.335 [2024-11-19 11:25:40.386419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.335 qpair failed and we were unable to recover it. 00:31:32.336 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:32.336 [2024-11-19 11:25:40.386702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.386712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.386878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.386888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 
00:31:32.336 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:32.336 [2024-11-19 11:25:40.387072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.387083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:32.336 [2024-11-19 11:25:40.387253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.387262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.336 [2024-11-19 11:25:40.387479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.387490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.387804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.387814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 
00:31:32.336 [2024-11-19 11:25:40.388127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.388137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.388424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.388434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.388720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.388730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.389034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.389044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.389373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.389384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 
00:31:32.336 [2024-11-19 11:25:40.389719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.389730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.389957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.389968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.390171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.390180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.390365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.390375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.390565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.390575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 
00:31:32.336 [2024-11-19 11:25:40.390876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.390886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.390926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.390935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.391234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.391243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.391576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.391586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.391901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.391913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 
00:31:32.336 [2024-11-19 11:25:40.392234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.392245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.392480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.392490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.392810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.392820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.393163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.393174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.393499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.393508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 
00:31:32.336 [2024-11-19 11:25:40.393800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.393810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.394029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.394039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.394362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.394372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.394687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.394698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.394883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.394894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 
00:31:32.336 [2024-11-19 11:25:40.395121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.395131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.395395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.395405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.395757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.395767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.395927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.395938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 00:31:32.336 [2024-11-19 11:25:40.396197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.336 [2024-11-19 11:25:40.396208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.336 qpair failed and we were unable to recover it. 
00:31:32.336 [2024-11-19 11:25:40.396512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.337 [2024-11-19 11:25:40.396521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.337 qpair failed and we were unable to recover it. 00:31:32.337 [2024-11-19 11:25:40.396858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.337 [2024-11-19 11:25:40.396871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.337 qpair failed and we were unable to recover it. 00:31:32.337 [2024-11-19 11:25:40.397104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.337 [2024-11-19 11:25:40.397122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.337 qpair failed and we were unable to recover it. 00:31:32.337 [2024-11-19 11:25:40.397417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.337 [2024-11-19 11:25:40.397426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.337 qpair failed and we were unable to recover it. 00:31:32.337 [2024-11-19 11:25:40.397738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.337 [2024-11-19 11:25:40.397747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420 00:31:32.337 qpair failed and we were unable to recover it. 
00:31:32.337 [2024-11-19 11:25:40.397921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.337 [2024-11-19 11:25:40.397932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1e490 with addr=10.0.0.2, port=4420
00:31:32.337 qpair failed and we were unable to recover it.
00:31:32.337-00:31:32.340 (the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0xf1e490, addr=10.0.0.2, port=4420 repeated for every reconnect attempt, timestamps [2024-11-19 11:25:40.398139] through [2024-11-19 11:25:40.426924])
00:31:32.340 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:32.340 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:32.340 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:32.340 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:32.342 [2024-11-19 11:25:40.453581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.453611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.454651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.454659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.454857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.454869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.455176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.455204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.455581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.455590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.456055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.456083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.456467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.456477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.456844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.456851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.457111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.457139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.457456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.457465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.457623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.457630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.457904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.457911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.458226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.458234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.458667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.458674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 Malloc0
00:31:32.342 [2024-11-19 11:25:40.458977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.458985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 [2024-11-19 11:25:40.459385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.342 [2024-11-19 11:25:40.459391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.342 qpair failed and we were unable to recover it.
00:31:32.342 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:32.343 [2024-11-19 11:25:40.459688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.459696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.459764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.459770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.459861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.459872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:32.343 [2024-11-19 11:25:40.460175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.460182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:32.343 [2024-11-19 11:25:40.460401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.460409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.460464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.460471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:32.343 [2024-11-19 11:25:40.460758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.460765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.461133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.461140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.461349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.461356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.461666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.461673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.461890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.461897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.462218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.462225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.462407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.462413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.462592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.462599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.462953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.462960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.463306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.463313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.463483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.463490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.463775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.463783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.463935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.463942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.464256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.464263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.464442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.464449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.464739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.464746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.465050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.465059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.465254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.465261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.465555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.465561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.465858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.465868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.466044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.466052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.466316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:32.343 [2024-11-19 11:25:40.466351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.466358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.466670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.466677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.466989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.466996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.467306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.343 [2024-11-19 11:25:40.467313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.343 qpair failed and we were unable to recover it.
00:31:32.343 [2024-11-19 11:25:40.467680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.467687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.468042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.468049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.468372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.468379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.468461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.468467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.468739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.468745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.469002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.469009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.469306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.469319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.469618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.469625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.469851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.469858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.470059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.470065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.470221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.470227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.470529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.470536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.470722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.470730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.471033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.471040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.471368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.471375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.471698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.471704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.472016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.472023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.472198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.472204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.472518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.472525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.472820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.472826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.472994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.473001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.473232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.473238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.473617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.473625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.473835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.473842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.474144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.474151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.474442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.474450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.474640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.474648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.474818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.474825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.475003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.475010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.475063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.475069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.475256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.475263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:32.344 [2024-11-19 11:25:40.475484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.475492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 [2024-11-19 11:25:40.475712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.475718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:32.344 [2024-11-19 11:25:40.475925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.475932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:32.344 [2024-11-19 11:25:40.476211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.476218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.344 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:32.344 [2024-11-19 11:25:40.476400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.344 [2024-11-19 11:25:40.476408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.344 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.476650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.476657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.477016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.477023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.477334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.477341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.477632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.477639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.478002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.478009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.478385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.478392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.478584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.478591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.478905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.478912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.479097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.479103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.479413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.345 [2024-11-19 11:25:40.479419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:31:32.345 qpair failed and we were unable to recover it.
00:31:32.345 [2024-11-19 11:25:40.479719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.479726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.479911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.479918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.480177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.480184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.480381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.480387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.480682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.480689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 
00:31:32.345 [2024-11-19 11:25:40.481025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.481032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.481193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.481201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.481484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.481491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.481660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.481667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.481871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.481878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 
00:31:32.345 [2024-11-19 11:25:40.481991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.481998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.482312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.482319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.482654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.482661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.482706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.482712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.483074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.483081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 
00:31:32.345 [2024-11-19 11:25:40.483394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.483400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.483689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.483696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.483880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.483887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.484193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.484200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.484504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.484510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 
00:31:32.345 [2024-11-19 11:25:40.484833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.484840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.485138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.485146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.485308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.485315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.485590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.485599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.485941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.485948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 
00:31:32.345 [2024-11-19 11:25:40.486262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.486269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.486614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.486621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.345 qpair failed and we were unable to recover it. 00:31:32.345 [2024-11-19 11:25:40.486813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.345 [2024-11-19 11:25:40.486819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.346 qpair failed and we were unable to recover it. 00:31:32.346 [2024-11-19 11:25:40.487061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.346 [2024-11-19 11:25:40.487068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.346 qpair failed and we were unable to recover it. 00:31:32.346 [2024-11-19 11:25:40.487256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.346 [2024-11-19 11:25:40.487265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.346 qpair failed and we were unable to recover it. 
00:31:32.346 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.346 [2024-11-19 11:25:40.487435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.346 [2024-11-19 11:25:40.487443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.346 qpair failed and we were unable to recover it. 00:31:32.346 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:32.346 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:32.346 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.346 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated 8 more times, 11:25:40.487797 through 11:25:40.489651; only the timestamps differ ...]
00:31:32.346 [2024-11-19 11:25:40.489831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.346 [2024-11-19 11:25:40.489838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.346 qpair failed and we were unable to recover it.
00:31:32.347 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated 34 more times, 11:25:40.489996 through 11:25:40.498070; only the timestamps differ ...]
00:31:32.347 [2024-11-19 11:25:40.498380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.347 [2024-11-19 11:25:40.498387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.347 qpair failed and we were unable to recover it. 00:31:32.347 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:32.347 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.347 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.347 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.347 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated 7 more times, 11:25:40.498789 through 11:25:40.500382; only the timestamps differ ...]
00:31:32.347 [2024-11-19 11:25:40.500694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.347 [2024-11-19 11:25:40.500700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.347 qpair failed and we were unable to recover it.
00:31:32.348 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated 19 more times, 11:25:40.501062 through 11:25:40.505893; only the timestamps differ ...]
00:31:32.348 [2024-11-19 11:25:40.506052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.348 [2024-11-19 11:25:40.506059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.348 qpair failed and we were unable to recover it. 00:31:32.348 [2024-11-19 11:25:40.506225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.348 [2024-11-19 11:25:40.506231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.348 qpair failed and we were unable to recover it. 00:31:32.348 [2024-11-19 11:25:40.506546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.348 [2024-11-19 11:25:40.506553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:31:32.348 qpair failed and we were unable to recover it. 
00:31:32.348 [2024-11-19 11:25:40.506562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.348 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.348 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:32.348 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.348 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.348 [2024-11-19 11:25:40.517252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.348 [2024-11-19 11:25:40.517316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.348 [2024-11-19 11:25:40.517328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.348 [2024-11-19 11:25:40.517334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.348 [2024-11-19 11:25:40.517339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.348 [2024-11-19 11:25:40.517353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.348 qpair failed and we were unable to recover it. 
00:31:32.348 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.348 11:25:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 146157 00:31:32.348 [2024-11-19 11:25:40.527154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.348 [2024-11-19 11:25:40.527208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.348 [2024-11-19 11:25:40.527219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.348 [2024-11-19 11:25:40.527224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.348 [2024-11-19 11:25:40.527231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.348 [2024-11-19 11:25:40.527242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.348 qpair failed and we were unable to recover it. 
00:31:32.348 [2024-11-19 11:25:40.537210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.348 [2024-11-19 11:25:40.537259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.348 [2024-11-19 11:25:40.537269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.348 [2024-11-19 11:25:40.537274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.348 [2024-11-19 11:25:40.537278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.348 [2024-11-19 11:25:40.537289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.348 qpair failed and we were unable to recover it. 
00:31:32.348 [2024-11-19 11:25:40.547190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.348 [2024-11-19 11:25:40.547246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.348 [2024-11-19 11:25:40.547256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.348 [2024-11-19 11:25:40.547261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.348 [2024-11-19 11:25:40.547265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.348 [2024-11-19 11:25:40.547275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.348 qpair failed and we were unable to recover it. 
00:31:32.348 [2024-11-19 11:25:40.557033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.348 [2024-11-19 11:25:40.557085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.348 [2024-11-19 11:25:40.557096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.348 [2024-11-19 11:25:40.557100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.348 [2024-11-19 11:25:40.557105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.348 [2024-11-19 11:25:40.557115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.348 qpair failed and we were unable to recover it. 
00:31:32.348 [2024-11-19 11:25:40.567029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.348 [2024-11-19 11:25:40.567092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.348 [2024-11-19 11:25:40.567102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.567106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.567111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.567121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.577191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.577282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.577291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.577296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.577301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.577311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.587194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.587257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.587267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.587272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.587276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.587286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.597147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.597202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.597211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.597216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.597221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.597231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.607165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.607220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.607230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.607235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.607239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.607249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.617237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.617342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.617354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.617359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.617363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.617373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.627320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.627372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.627381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.627386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.627390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.627400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.637333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.637379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.637389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.637394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.637398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.637407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.647231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.647294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.647304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.647309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.647313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.647323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.657382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.657434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.657444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.657451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.657456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.657466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.349 [2024-11-19 11:25:40.667415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.349 [2024-11-19 11:25:40.667471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.349 [2024-11-19 11:25:40.667480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.349 [2024-11-19 11:25:40.667485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.349 [2024-11-19 11:25:40.667489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.349 [2024-11-19 11:25:40.667499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.349 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.677464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.677518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.677529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.677534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.677539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.677549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.687460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.687509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.687520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.687525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.687529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.687539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.697511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.697588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.697598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.697603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.697607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.697620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.707517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.707566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.707576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.707581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.707586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.707596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.717461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.717511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.717521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.717526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.717530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.717540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.727585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.727630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.727643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.727648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.727652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.727663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.737744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.737799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.737809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.737814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.737818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.737828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.747707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.747795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.747804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.747809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.747814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.747823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.612 [2024-11-19 11:25:40.757713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.612 [2024-11-19 11:25:40.757765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.612 [2024-11-19 11:25:40.757775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.612 [2024-11-19 11:25:40.757780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.612 [2024-11-19 11:25:40.757784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.612 [2024-11-19 11:25:40.757794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.612 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.108680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.108732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.108743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.108748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.108752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.108763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.118709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.118762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.118786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.118791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.118795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.118810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.128620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.128717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.128727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.128732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.128737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.128747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.138630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.138684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.138695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.138700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.138705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.138715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.148746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.148798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.148808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.148813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.148818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.148828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.158822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.158878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.158889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.158894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.158898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.158908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.168833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.168893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.168903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.168908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.168912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.168922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.178866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.178915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.178928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.178933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.178937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.178948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.188901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.188987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.188997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.189002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.189006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.189016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.198930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.198977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.198986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.198991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.198995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.199006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.208952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.208999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.209009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.209014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.209018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.878 [2024-11-19 11:25:41.209029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.878 qpair failed and we were unable to recover it. 
00:31:32.878 [2024-11-19 11:25:41.218956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:32.878 [2024-11-19 11:25:41.219002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:32.878 [2024-11-19 11:25:41.219012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:32.878 [2024-11-19 11:25:41.219019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:32.878 [2024-11-19 11:25:41.219024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:32.879 [2024-11-19 11:25:41.219034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:32.879 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.229007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.141 [2024-11-19 11:25:41.229062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.141 [2024-11-19 11:25:41.229071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.141 [2024-11-19 11:25:41.229076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.141 [2024-11-19 11:25:41.229080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.141 [2024-11-19 11:25:41.229091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.141 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.239041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.141 [2024-11-19 11:25:41.239090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.141 [2024-11-19 11:25:41.239099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.141 [2024-11-19 11:25:41.239104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.141 [2024-11-19 11:25:41.239108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.141 [2024-11-19 11:25:41.239118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.141 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.249008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.141 [2024-11-19 11:25:41.249055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.141 [2024-11-19 11:25:41.249065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.141 [2024-11-19 11:25:41.249070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.141 [2024-11-19 11:25:41.249074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.141 [2024-11-19 11:25:41.249084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.141 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.259074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.141 [2024-11-19 11:25:41.259134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.141 [2024-11-19 11:25:41.259144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.141 [2024-11-19 11:25:41.259149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.141 [2024-11-19 11:25:41.259153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.141 [2024-11-19 11:25:41.259166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.141 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.269093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.141 [2024-11-19 11:25:41.269182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.141 [2024-11-19 11:25:41.269192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.141 [2024-11-19 11:25:41.269197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.141 [2024-11-19 11:25:41.269201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.141 [2024-11-19 11:25:41.269211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.141 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.279141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.141 [2024-11-19 11:25:41.279190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.141 [2024-11-19 11:25:41.279200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.141 [2024-11-19 11:25:41.279205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.141 [2024-11-19 11:25:41.279210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.141 [2024-11-19 11:25:41.279219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.141 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.289151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.141 [2024-11-19 11:25:41.289216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.141 [2024-11-19 11:25:41.289226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.141 [2024-11-19 11:25:41.289233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.141 [2024-11-19 11:25:41.289237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.141 [2024-11-19 11:25:41.289247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.141 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.299196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.141 [2024-11-19 11:25:41.299249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.141 [2024-11-19 11:25:41.299258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.141 [2024-11-19 11:25:41.299264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.141 [2024-11-19 11:25:41.299268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.141 [2024-11-19 11:25:41.299278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.141 qpair failed and we were unable to recover it. 
00:31:33.141 [2024-11-19 11:25:41.309228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.309315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.309325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.309330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.309334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.309344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.319274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.319326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.319336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.319341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.319345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.319355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.329284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.329362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.329372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.329377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.329381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.329391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.339317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.339363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.339372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.339377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.339382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.339392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.349343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.349397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.349406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.349414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.349418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.349428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.359371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.359424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.359433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.359438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.359443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.359453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.369381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.369425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.369435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.369440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.369444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.369454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.379413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.379486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.379496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.379501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.379505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.379515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.389511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.389565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.389574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.389579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.389584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.389596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.399477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.399557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.399566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.399571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.399576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.399585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.409491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.409535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.409544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.409549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.409554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.409564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.419558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.419646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.419656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.419661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.419666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.419675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.429557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.142 [2024-11-19 11:25:41.429605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.142 [2024-11-19 11:25:41.429615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.142 [2024-11-19 11:25:41.429620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.142 [2024-11-19 11:25:41.429624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.142 [2024-11-19 11:25:41.429635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.142 qpair failed and we were unable to recover it. 
00:31:33.142 [2024-11-19 11:25:41.439584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.143 [2024-11-19 11:25:41.439636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.143 [2024-11-19 11:25:41.439646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.143 [2024-11-19 11:25:41.439651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.143 [2024-11-19 11:25:41.439656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.143 [2024-11-19 11:25:41.439666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.143 qpair failed and we were unable to recover it. 
00:31:33.143 [2024-11-19 11:25:41.449628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.143 [2024-11-19 11:25:41.449672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.143 [2024-11-19 11:25:41.449681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.143 [2024-11-19 11:25:41.449686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.143 [2024-11-19 11:25:41.449690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.143 [2024-11-19 11:25:41.449701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.143 qpair failed and we were unable to recover it. 
00:31:33.143 [2024-11-19 11:25:41.459506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.143 [2024-11-19 11:25:41.459557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.143 [2024-11-19 11:25:41.459567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.143 [2024-11-19 11:25:41.459572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.143 [2024-11-19 11:25:41.459576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.143 [2024-11-19 11:25:41.459586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.143 qpair failed and we were unable to recover it. 
00:31:33.143 [2024-11-19 11:25:41.469670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.143 [2024-11-19 11:25:41.469724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.143 [2024-11-19 11:25:41.469734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.143 [2024-11-19 11:25:41.469739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.143 [2024-11-19 11:25:41.469743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.143 [2024-11-19 11:25:41.469753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.143 qpair failed and we were unable to recover it. 
00:31:33.143 [2024-11-19 11:25:41.479699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.143 [2024-11-19 11:25:41.479749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.143 [2024-11-19 11:25:41.479761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.143 [2024-11-19 11:25:41.479766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.143 [2024-11-19 11:25:41.479770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.143 [2024-11-19 11:25:41.479780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.143 qpair failed and we were unable to recover it. 
00:31:33.143 [2024-11-19 11:25:41.489739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.143 [2024-11-19 11:25:41.489783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.143 [2024-11-19 11:25:41.489793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.143 [2024-11-19 11:25:41.489798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.143 [2024-11-19 11:25:41.489802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.143 [2024-11-19 11:25:41.489812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.143 qpair failed and we were unable to recover it. 
00:31:33.405 [2024-11-19 11:25:41.499736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.405 [2024-11-19 11:25:41.499788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.405 [2024-11-19 11:25:41.499797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.405 [2024-11-19 11:25:41.499802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.405 [2024-11-19 11:25:41.499807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.499817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.509747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.509797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.509807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.509811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.509816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.509825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.519842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.519897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.519907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.519912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.519919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.519929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.529812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.529865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.529875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.529880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.529884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.529894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.539855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.539909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.539919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.539924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.539928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.539938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.549918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.549973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.549983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.549988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.549992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.550002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.559923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.559972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.559981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.559986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.559990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.560000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.569946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.569997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.570007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.570012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.570016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.570026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.579860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.579955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.579965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.579971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.579975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.579985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.590005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.590086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.590096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.590101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.590105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.590115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.600042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.600138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.600148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.600153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.600157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.600167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.609921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.609966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.609979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.609984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.609988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.609998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.619966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.620013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.406 [2024-11-19 11:25:41.620023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.406 [2024-11-19 11:25:41.620027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.406 [2024-11-19 11:25:41.620032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.406 [2024-11-19 11:25:41.620042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.406 qpair failed and we were unable to recover it. 
00:31:33.406 [2024-11-19 11:25:41.630039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.406 [2024-11-19 11:25:41.630086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.407 [2024-11-19 11:25:41.630095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.407 [2024-11-19 11:25:41.630100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.407 [2024-11-19 11:25:41.630105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.407 [2024-11-19 11:25:41.630115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.407 qpair failed and we were unable to recover it. 
00:31:33.407 [2024-11-19 11:25:41.640172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.407 [2024-11-19 11:25:41.640230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.407 [2024-11-19 11:25:41.640239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.407 [2024-11-19 11:25:41.640244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.407 [2024-11-19 11:25:41.640248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.407 [2024-11-19 11:25:41.640258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.407 qpair failed and we were unable to recover it. 
00:31:33.407 [2024-11-19 11:25:41.650175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.407 [2024-11-19 11:25:41.650218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.407 [2024-11-19 11:25:41.650227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.407 [2024-11-19 11:25:41.650232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.407 [2024-11-19 11:25:41.650239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.407 [2024-11-19 11:25:41.650249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.407 qpair failed and we were unable to recover it. 
00:31:33.407 [2024-11-19 11:25:41.660219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.407 [2024-11-19 11:25:41.660269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.407 [2024-11-19 11:25:41.660279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.407 [2024-11-19 11:25:41.660284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.407 [2024-11-19 11:25:41.660288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.407 [2024-11-19 11:25:41.660298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.407 qpair failed and we were unable to recover it. 
00:31:33.407 [2024-11-19 11:25:41.670107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.407 [2024-11-19 11:25:41.670158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.407 [2024-11-19 11:25:41.670169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.407 [2024-11-19 11:25:41.670174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.407 [2024-11-19 11:25:41.670178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.407 [2024-11-19 11:25:41.670188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.407 qpair failed and we were unable to recover it. 
00:31:33.407 [2024-11-19 11:25:41.680273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.407 [2024-11-19 11:25:41.680324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.407 [2024-11-19 11:25:41.680334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.407 [2024-11-19 11:25:41.680340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.407 [2024-11-19 11:25:41.680344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.407 [2024-11-19 11:25:41.680354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.407 qpair failed and we were unable to recover it. 
00:31:33.407 [2024-11-19 11:25:41.690330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.407 [2024-11-19 11:25:41.690397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.407 [2024-11-19 11:25:41.690406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.407 [2024-11-19 11:25:41.690411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.407 [2024-11-19 11:25:41.690416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.407 [2024-11-19 11:25:41.690426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.407 qpair failed and we were unable to recover it.
00:31:33.407 [2024-11-19 11:25:41.700312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.407 [2024-11-19 11:25:41.700375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.407 [2024-11-19 11:25:41.700385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.407 [2024-11-19 11:25:41.700390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.407 [2024-11-19 11:25:41.700395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.407 [2024-11-19 11:25:41.700405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.407 qpair failed and we were unable to recover it.
00:31:33.407 [2024-11-19 11:25:41.710234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.407 [2024-11-19 11:25:41.710296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.407 [2024-11-19 11:25:41.710307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.407 [2024-11-19 11:25:41.710311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.407 [2024-11-19 11:25:41.710316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.407 [2024-11-19 11:25:41.710327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.407 qpair failed and we were unable to recover it.
00:31:33.407 [2024-11-19 11:25:41.720387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.407 [2024-11-19 11:25:41.720438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.407 [2024-11-19 11:25:41.720448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.407 [2024-11-19 11:25:41.720453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.407 [2024-11-19 11:25:41.720458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.407 [2024-11-19 11:25:41.720468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.407 qpair failed and we were unable to recover it.
00:31:33.407 [2024-11-19 11:25:41.730413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.407 [2024-11-19 11:25:41.730512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.407 [2024-11-19 11:25:41.730522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.407 [2024-11-19 11:25:41.730527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.407 [2024-11-19 11:25:41.730531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.407 [2024-11-19 11:25:41.730541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.407 qpair failed and we were unable to recover it.
00:31:33.407 [2024-11-19 11:25:41.740399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.407 [2024-11-19 11:25:41.740453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.407 [2024-11-19 11:25:41.740463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.407 [2024-11-19 11:25:41.740468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.407 [2024-11-19 11:25:41.740472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.407 [2024-11-19 11:25:41.740482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.407 qpair failed and we were unable to recover it.
00:31:33.407 [2024-11-19 11:25:41.750471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.407 [2024-11-19 11:25:41.750547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.407 [2024-11-19 11:25:41.750557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.407 [2024-11-19 11:25:41.750562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.407 [2024-11-19 11:25:41.750567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.407 [2024-11-19 11:25:41.750576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.407 qpair failed and we were unable to recover it.
00:31:33.670 [2024-11-19 11:25:41.760497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.670 [2024-11-19 11:25:41.760547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.670 [2024-11-19 11:25:41.760558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.670 [2024-11-19 11:25:41.760563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.670 [2024-11-19 11:25:41.760567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.670 [2024-11-19 11:25:41.760577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.670 qpair failed and we were unable to recover it.
00:31:33.670 [2024-11-19 11:25:41.770415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.670 [2024-11-19 11:25:41.770462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.670 [2024-11-19 11:25:41.770473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.670 [2024-11-19 11:25:41.770478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.670 [2024-11-19 11:25:41.770482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.670 [2024-11-19 11:25:41.770492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.670 qpair failed and we were unable to recover it.
00:31:33.670 [2024-11-19 11:25:41.780509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.670 [2024-11-19 11:25:41.780554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.670 [2024-11-19 11:25:41.780564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.670 [2024-11-19 11:25:41.780572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.670 [2024-11-19 11:25:41.780577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.670 [2024-11-19 11:25:41.780587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.670 qpair failed and we were unable to recover it.
00:31:33.670 [2024-11-19 11:25:41.790568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.670 [2024-11-19 11:25:41.790617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.670 [2024-11-19 11:25:41.790627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.670 [2024-11-19 11:25:41.790632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.670 [2024-11-19 11:25:41.790637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.670 [2024-11-19 11:25:41.790647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.670 qpair failed and we were unable to recover it.
00:31:33.670 [2024-11-19 11:25:41.800502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.670 [2024-11-19 11:25:41.800552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.670 [2024-11-19 11:25:41.800562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.670 [2024-11-19 11:25:41.800566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.670 [2024-11-19 11:25:41.800571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.670 [2024-11-19 11:25:41.800581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.670 qpair failed and we were unable to recover it.
00:31:33.670 [2024-11-19 11:25:41.810616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.670 [2024-11-19 11:25:41.810702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.670 [2024-11-19 11:25:41.810711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.670 [2024-11-19 11:25:41.810716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.670 [2024-11-19 11:25:41.810721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.670 [2024-11-19 11:25:41.810731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.670 qpair failed and we were unable to recover it.
00:31:33.670 [2024-11-19 11:25:41.820652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.670 [2024-11-19 11:25:41.820707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.670 [2024-11-19 11:25:41.820726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.670 [2024-11-19 11:25:41.820732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.670 [2024-11-19 11:25:41.820737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.670 [2024-11-19 11:25:41.820758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.670 qpair failed and we were unable to recover it.
00:31:33.670 [2024-11-19 11:25:41.830683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.670 [2024-11-19 11:25:41.830734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.670 [2024-11-19 11:25:41.830745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.670 [2024-11-19 11:25:41.830750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.670 [2024-11-19 11:25:41.830755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.670 [2024-11-19 11:25:41.830766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.670 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.840726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.840777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.840787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.840792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.840796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.840807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.850647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.850690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.850700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.850705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.850709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.850719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.860762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.860811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.860821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.860826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.860830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.860840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.870806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.870865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.870875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.870880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.870884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.870894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.880833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.880891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.880901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.880906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.880910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.880920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.890836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.890885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.890895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.890900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.890905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.890915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.900918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.900971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.900980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.900985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.900990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.901000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.910914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.910964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.910974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.910982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.910986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.910996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.920947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.920999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.921009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.921014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.921018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.921028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.930821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.930873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.930883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.930888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.930892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.930902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.941011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.941071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.941081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.941086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.941090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.941100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.950909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.950984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.950995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.951001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.951005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.951019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.671 [2024-11-19 11:25:41.961084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.671 [2024-11-19 11:25:41.961138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.671 [2024-11-19 11:25:41.961149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.671 [2024-11-19 11:25:41.961154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.671 [2024-11-19 11:25:41.961158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.671 [2024-11-19 11:25:41.961169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.671 qpair failed and we were unable to recover it.
00:31:33.672 [2024-11-19 11:25:41.971054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.672 [2024-11-19 11:25:41.971109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.672 [2024-11-19 11:25:41.971119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.672 [2024-11-19 11:25:41.971124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.672 [2024-11-19 11:25:41.971128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.672 [2024-11-19 11:25:41.971138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.672 qpair failed and we were unable to recover it.
00:31:33.672 [2024-11-19 11:25:41.981115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.672 [2024-11-19 11:25:41.981163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.672 [2024-11-19 11:25:41.981173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.672 [2024-11-19 11:25:41.981178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.672 [2024-11-19 11:25:41.981182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.672 [2024-11-19 11:25:41.981192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.672 qpair failed and we were unable to recover it.
00:31:33.672 [2024-11-19 11:25:41.991145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.672 [2024-11-19 11:25:41.991193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.672 [2024-11-19 11:25:41.991203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.672 [2024-11-19 11:25:41.991208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.672 [2024-11-19 11:25:41.991213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.672 [2024-11-19 11:25:41.991222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.672 qpair failed and we were unable to recover it.
00:31:33.672 [2024-11-19 11:25:42.001181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.672 [2024-11-19 11:25:42.001235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.672 [2024-11-19 11:25:42.001244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.672 [2024-11-19 11:25:42.001249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.672 [2024-11-19 11:25:42.001253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.672 [2024-11-19 11:25:42.001263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.672 qpair failed and we were unable to recover it.
00:31:33.672 [2024-11-19 11:25:42.011164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.672 [2024-11-19 11:25:42.011211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.672 [2024-11-19 11:25:42.011221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.672 [2024-11-19 11:25:42.011226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.672 [2024-11-19 11:25:42.011230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.672 [2024-11-19 11:25:42.011240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.672 qpair failed and we were unable to recover it.
00:31:33.935 [2024-11-19 11:25:42.021233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.935 [2024-11-19 11:25:42.021281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.935 [2024-11-19 11:25:42.021291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.935 [2024-11-19 11:25:42.021296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.935 [2024-11-19 11:25:42.021300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.935 [2024-11-19 11:25:42.021310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.935 qpair failed and we were unable to recover it.
00:31:33.935 [2024-11-19 11:25:42.031251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.935 [2024-11-19 11:25:42.031300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.935 [2024-11-19 11:25:42.031309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.935 [2024-11-19 11:25:42.031314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.935 [2024-11-19 11:25:42.031319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.935 [2024-11-19 11:25:42.031328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.935 qpair failed and we were unable to recover it.
00:31:33.935 [2024-11-19 11:25:42.041306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.935 [2024-11-19 11:25:42.041358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.935 [2024-11-19 11:25:42.041370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.935 [2024-11-19 11:25:42.041375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.935 [2024-11-19 11:25:42.041379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.935 [2024-11-19 11:25:42.041389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.935 qpair failed and we were unable to recover it. 
00:31:33.935 [2024-11-19 11:25:42.051285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.935 [2024-11-19 11:25:42.051336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.935 [2024-11-19 11:25:42.051345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.935 [2024-11-19 11:25:42.051350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.935 [2024-11-19 11:25:42.051354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.935 [2024-11-19 11:25:42.051364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.935 qpair failed and we were unable to recover it. 
00:31:33.935 [2024-11-19 11:25:42.061325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:33.935 [2024-11-19 11:25:42.061377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:33.935 [2024-11-19 11:25:42.061387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:33.935 [2024-11-19 11:25:42.061391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:33.935 [2024-11-19 11:25:42.061396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:33.935 [2024-11-19 11:25:42.061406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.935 qpair failed and we were unable to recover it. 
00:31:33.935 [2024-11-19 11:25:42.071377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.935 [2024-11-19 11:25:42.071438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.935 [2024-11-19 11:25:42.071448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.935 [2024-11-19 11:25:42.071453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.935 [2024-11-19 11:25:42.071457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.935 [2024-11-19 11:25:42.071467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.935 qpair failed and we were unable to recover it.
00:31:33.935 [2024-11-19 11:25:42.081381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.935 [2024-11-19 11:25:42.081438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.935 [2024-11-19 11:25:42.081448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.935 [2024-11-19 11:25:42.081452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.935 [2024-11-19 11:25:42.081459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.935 [2024-11-19 11:25:42.081469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.935 qpair failed and we were unable to recover it.
00:31:33.935 [2024-11-19 11:25:42.091418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.935 [2024-11-19 11:25:42.091460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.935 [2024-11-19 11:25:42.091470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.935 [2024-11-19 11:25:42.091475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.091479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.091489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.101454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.101499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.101509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.101513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.101518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.101527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.111468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.111516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.111525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.111530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.111534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.111544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.121517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.121568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.121578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.121582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.121587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.121596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.131522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.131577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.131586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.131591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.131596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.131605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.141525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.141571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.141589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.141595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.141600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.141614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.151466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.151536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.151547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.151552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.151557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.151567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.161497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.161546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.161556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.161561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.161566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.161576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.171641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.171688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.171704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.171709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.171713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.171725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.181672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.181728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.181746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.181751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.181756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.181770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.191612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.191673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.191684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.191689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.191693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.191705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.201712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.201761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.201771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.201776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.201780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.201790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.211614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.211660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.211669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.211674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.211682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.936 [2024-11-19 11:25:42.211692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.936 qpair failed and we were unable to recover it.
00:31:33.936 [2024-11-19 11:25:42.221741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.936 [2024-11-19 11:25:42.221788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.936 [2024-11-19 11:25:42.221798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.936 [2024-11-19 11:25:42.221803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.936 [2024-11-19 11:25:42.221808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.937 [2024-11-19 11:25:42.221818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.937 qpair failed and we were unable to recover it.
00:31:33.937 [2024-11-19 11:25:42.231801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.937 [2024-11-19 11:25:42.231850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.937 [2024-11-19 11:25:42.231860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.937 [2024-11-19 11:25:42.231868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.937 [2024-11-19 11:25:42.231873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.937 [2024-11-19 11:25:42.231883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.937 qpair failed and we were unable to recover it.
00:31:33.937 [2024-11-19 11:25:42.241897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.937 [2024-11-19 11:25:42.241953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.937 [2024-11-19 11:25:42.241963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.937 [2024-11-19 11:25:42.241968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.937 [2024-11-19 11:25:42.241972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.937 [2024-11-19 11:25:42.241982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.937 qpair failed and we were unable to recover it.
00:31:33.937 [2024-11-19 11:25:42.251896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.937 [2024-11-19 11:25:42.251966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.937 [2024-11-19 11:25:42.251976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.937 [2024-11-19 11:25:42.251981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.937 [2024-11-19 11:25:42.251985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.937 [2024-11-19 11:25:42.251996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.937 qpair failed and we were unable to recover it.
00:31:33.937 [2024-11-19 11:25:42.261851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.937 [2024-11-19 11:25:42.261902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.937 [2024-11-19 11:25:42.261912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.937 [2024-11-19 11:25:42.261917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.937 [2024-11-19 11:25:42.261921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.937 [2024-11-19 11:25:42.261931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.937 qpair failed and we were unable to recover it.
00:31:33.937 [2024-11-19 11:25:42.271957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.937 [2024-11-19 11:25:42.272031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.937 [2024-11-19 11:25:42.272041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.937 [2024-11-19 11:25:42.272046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.937 [2024-11-19 11:25:42.272050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.937 [2024-11-19 11:25:42.272060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.937 qpair failed and we were unable to recover it.
00:31:33.937 [2024-11-19 11:25:42.281948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:33.937 [2024-11-19 11:25:42.282007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:33.937 [2024-11-19 11:25:42.282018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:33.937 [2024-11-19 11:25:42.282023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:33.937 [2024-11-19 11:25:42.282027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:33.937 [2024-11-19 11:25:42.282038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:33.937 qpair failed and we were unable to recover it.
00:31:34.199 [2024-11-19 11:25:42.291961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.199 [2024-11-19 11:25:42.292025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.199 [2024-11-19 11:25:42.292035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.199 [2024-11-19 11:25:42.292040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.199 [2024-11-19 11:25:42.292045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.199 [2024-11-19 11:25:42.292055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.199 qpair failed and we were unable to recover it.
00:31:34.199 [2024-11-19 11:25:42.301972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.199 [2024-11-19 11:25:42.302027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.199 [2024-11-19 11:25:42.302037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.199 [2024-11-19 11:25:42.302042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.199 [2024-11-19 11:25:42.302046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.199 [2024-11-19 11:25:42.302056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.199 qpair failed and we were unable to recover it.
00:31:34.199 [2024-11-19 11:25:42.311931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.199 [2024-11-19 11:25:42.311981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.199 [2024-11-19 11:25:42.311991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.199 [2024-11-19 11:25:42.311996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.199 [2024-11-19 11:25:42.312001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.199 [2024-11-19 11:25:42.312011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.199 qpair failed and we were unable to recover it.
00:31:34.199 [2024-11-19 11:25:42.322045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.199 [2024-11-19 11:25:42.322094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.199 [2024-11-19 11:25:42.322104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.199 [2024-11-19 11:25:42.322109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.199 [2024-11-19 11:25:42.322114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.199 [2024-11-19 11:25:42.322124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.199 qpair failed and we were unable to recover it.
00:31:34.199 [2024-11-19 11:25:42.331967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.199 [2024-11-19 11:25:42.332023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.199 [2024-11-19 11:25:42.332034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.199 [2024-11-19 11:25:42.332039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.199 [2024-11-19 11:25:42.332043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.199 [2024-11-19 11:25:42.332053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.199 qpair failed and we were unable to recover it.
00:31:34.199 [2024-11-19 11:25:42.342119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.199 [2024-11-19 11:25:42.342170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.199 [2024-11-19 11:25:42.342180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.199 [2024-11-19 11:25:42.342188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.199 [2024-11-19 11:25:42.342192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.199 [2024-11-19 11:25:42.342203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.199 qpair failed and we were unable to recover it.
00:31:34.199 [2024-11-19 11:25:42.352173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.199 [2024-11-19 11:25:42.352236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.199 [2024-11-19 11:25:42.352246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.199 [2024-11-19 11:25:42.352251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.199 [2024-11-19 11:25:42.352255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.352265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.362188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.362245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.362255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.362260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.362264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.362274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.372211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.372254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.372264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.372269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.372273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.372283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.382100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.382149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.382159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.382163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.382168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.382181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.392268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.392315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.392324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.392329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.392334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.392344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.402276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.402330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.402339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.402344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.402349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.402359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.412241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.412285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.412294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.412300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.412304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.412314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.422327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.422379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.422389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.422393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.422398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.422408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.432349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.432401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.432410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.432415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.432420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.432430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.442416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.442489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.442498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.442503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.442507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.442517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.452315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.452358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.452368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.452373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.452377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.452387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.462464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.462508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.462517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.462522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.462526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.462536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.472538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.472596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.200 [2024-11-19 11:25:42.472608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.200 [2024-11-19 11:25:42.472613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.200 [2024-11-19 11:25:42.472617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.200 [2024-11-19 11:25:42.472627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-11-19 11:25:42.482538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.200 [2024-11-19 11:25:42.482585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.201 [2024-11-19 11:25:42.482594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.201 [2024-11-19 11:25:42.482599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.201 [2024-11-19 11:25:42.482604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.201 [2024-11-19 11:25:42.482613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.201 qpair failed and we were unable to recover it. 
00:31:34.201 [2024-11-19 11:25:42.492553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.201 [2024-11-19 11:25:42.492598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.201 [2024-11-19 11:25:42.492608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.201 [2024-11-19 11:25:42.492613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.201 [2024-11-19 11:25:42.492617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.201 [2024-11-19 11:25:42.492627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.201 qpair failed and we were unable to recover it. 
00:31:34.201 [2024-11-19 11:25:42.502488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.201 [2024-11-19 11:25:42.502532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.201 [2024-11-19 11:25:42.502542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.201 [2024-11-19 11:25:42.502546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.201 [2024-11-19 11:25:42.502551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.201 [2024-11-19 11:25:42.502561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.201 qpair failed and we were unable to recover it. 
00:31:34.201 [2024-11-19 11:25:42.512633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.201 [2024-11-19 11:25:42.512683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.201 [2024-11-19 11:25:42.512693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.201 [2024-11-19 11:25:42.512698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.201 [2024-11-19 11:25:42.512703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.201 [2024-11-19 11:25:42.512715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.201 qpair failed and we were unable to recover it. 
00:31:34.201 [2024-11-19 11:25:42.522673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.201 [2024-11-19 11:25:42.522737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.201 [2024-11-19 11:25:42.522747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.201 [2024-11-19 11:25:42.522751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.201 [2024-11-19 11:25:42.522756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.201 [2024-11-19 11:25:42.522766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.201 qpair failed and we were unable to recover it. 
00:31:34.201 [2024-11-19 11:25:42.532565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.201 [2024-11-19 11:25:42.532616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.201 [2024-11-19 11:25:42.532626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.201 [2024-11-19 11:25:42.532631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.201 [2024-11-19 11:25:42.532635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.201 [2024-11-19 11:25:42.532645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.201 qpair failed and we were unable to recover it. 
00:31:34.201 [2024-11-19 11:25:42.542704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.201 [2024-11-19 11:25:42.542748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.201 [2024-11-19 11:25:42.542758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.201 [2024-11-19 11:25:42.542763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.201 [2024-11-19 11:25:42.542767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.201 [2024-11-19 11:25:42.542777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.201 qpair failed and we were unable to recover it. 
00:31:34.464 [2024-11-19 11:25:42.552724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.464 [2024-11-19 11:25:42.552773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.464 [2024-11-19 11:25:42.552782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.464 [2024-11-19 11:25:42.552787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.464 [2024-11-19 11:25:42.552792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.464 [2024-11-19 11:25:42.552801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.464 qpair failed and we were unable to recover it. 
00:31:34.464 [2024-11-19 11:25:42.562648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.464 [2024-11-19 11:25:42.562697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.464 [2024-11-19 11:25:42.562708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.464 [2024-11-19 11:25:42.562712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.464 [2024-11-19 11:25:42.562717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.464 [2024-11-19 11:25:42.562727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.464 qpair failed and we were unable to recover it. 
00:31:34.464 [2024-11-19 11:25:42.572669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.464 [2024-11-19 11:25:42.572722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.464 [2024-11-19 11:25:42.572731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.464 [2024-11-19 11:25:42.572736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.464 [2024-11-19 11:25:42.572740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.464 [2024-11-19 11:25:42.572750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.464 qpair failed and we were unable to recover it. 
00:31:34.464 [2024-11-19 11:25:42.582723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.464 [2024-11-19 11:25:42.582782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.464 [2024-11-19 11:25:42.582791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.464 [2024-11-19 11:25:42.582796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.582800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.582810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.592839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.592899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.592908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.465 [2024-11-19 11:25:42.592913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.592918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.592927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.602897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.602952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.602968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.465 [2024-11-19 11:25:42.602973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.602977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.602987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.612795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.612844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.612853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.465 [2024-11-19 11:25:42.612859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.612866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.612876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.622908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.622963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.622973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.465 [2024-11-19 11:25:42.622977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.622982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.622992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.632949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.632998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.633008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.465 [2024-11-19 11:25:42.633012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.633017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.633027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.643013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.643063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.643072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.465 [2024-11-19 11:25:42.643077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.643084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.643094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.652971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.653020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.653030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.465 [2024-11-19 11:25:42.653034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.653039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.653049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.663039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.663118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.663129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.465 [2024-11-19 11:25:42.663134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.465 [2024-11-19 11:25:42.663138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.465 [2024-11-19 11:25:42.663148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.465 qpair failed and we were unable to recover it. 
00:31:34.465 [2024-11-19 11:25:42.673088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.465 [2024-11-19 11:25:42.673141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.465 [2024-11-19 11:25:42.673151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.673156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.673160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.673170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.466 [2024-11-19 11:25:42.683147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.466 [2024-11-19 11:25:42.683228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.466 [2024-11-19 11:25:42.683238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.683243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.683247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.683257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.466 [2024-11-19 11:25:42.693047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.466 [2024-11-19 11:25:42.693104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.466 [2024-11-19 11:25:42.693114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.693118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.693123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.693133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.466 [2024-11-19 11:25:42.703169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.466 [2024-11-19 11:25:42.703257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.466 [2024-11-19 11:25:42.703266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.703271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.703275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.703285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.466 [2024-11-19 11:25:42.713196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.466 [2024-11-19 11:25:42.713246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.466 [2024-11-19 11:25:42.713255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.713260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.713264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.713274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.466 [2024-11-19 11:25:42.723210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.466 [2024-11-19 11:25:42.723261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.466 [2024-11-19 11:25:42.723270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.723275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.723279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.723289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.466 [2024-11-19 11:25:42.733243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.466 [2024-11-19 11:25:42.733290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.466 [2024-11-19 11:25:42.733302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.733307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.733311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.733321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.466 [2024-11-19 11:25:42.743353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.466 [2024-11-19 11:25:42.743409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.466 [2024-11-19 11:25:42.743419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.743424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.743428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.743438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.466 [2024-11-19 11:25:42.753344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.466 [2024-11-19 11:25:42.753392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.466 [2024-11-19 11:25:42.753401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.466 [2024-11-19 11:25:42.753406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.466 [2024-11-19 11:25:42.753410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.466 [2024-11-19 11:25:42.753420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.466 qpair failed and we were unable to recover it. 
00:31:34.467 [2024-11-19 11:25:42.763371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.467 [2024-11-19 11:25:42.763419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.467 [2024-11-19 11:25:42.763429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.467 [2024-11-19 11:25:42.763433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.467 [2024-11-19 11:25:42.763438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.467 [2024-11-19 11:25:42.763448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.467 qpair failed and we were unable to recover it. 
00:31:34.467 [2024-11-19 11:25:42.773379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.467 [2024-11-19 11:25:42.773425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.467 [2024-11-19 11:25:42.773434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.467 [2024-11-19 11:25:42.773442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.467 [2024-11-19 11:25:42.773446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.467 [2024-11-19 11:25:42.773456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.467 qpair failed and we were unable to recover it. 
00:31:34.467 [2024-11-19 11:25:42.783375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.467 [2024-11-19 11:25:42.783418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.467 [2024-11-19 11:25:42.783428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.467 [2024-11-19 11:25:42.783433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.467 [2024-11-19 11:25:42.783437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.467 [2024-11-19 11:25:42.783446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.467 qpair failed and we were unable to recover it. 
00:31:34.467 [2024-11-19 11:25:42.793398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.467 [2024-11-19 11:25:42.793451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.467 [2024-11-19 11:25:42.793460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.467 [2024-11-19 11:25:42.793465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.467 [2024-11-19 11:25:42.793470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.467 [2024-11-19 11:25:42.793480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.467 qpair failed and we were unable to recover it. 
00:31:34.467 [2024-11-19 11:25:42.803455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.467 [2024-11-19 11:25:42.803533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.467 [2024-11-19 11:25:42.803542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.467 [2024-11-19 11:25:42.803547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.467 [2024-11-19 11:25:42.803551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.467 [2024-11-19 11:25:42.803561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.467 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.813482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.813569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.730 [2024-11-19 11:25:42.813579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.730 [2024-11-19 11:25:42.813584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.730 [2024-11-19 11:25:42.813588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.730 [2024-11-19 11:25:42.813598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.730 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.823474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.823530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.730 [2024-11-19 11:25:42.823540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.730 [2024-11-19 11:25:42.823545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.730 [2024-11-19 11:25:42.823549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.730 [2024-11-19 11:25:42.823559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.730 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.833463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.833512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.730 [2024-11-19 11:25:42.833521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.730 [2024-11-19 11:25:42.833526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.730 [2024-11-19 11:25:42.833531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.730 [2024-11-19 11:25:42.833540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.730 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.843490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.843581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.730 [2024-11-19 11:25:42.843590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.730 [2024-11-19 11:25:42.843595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.730 [2024-11-19 11:25:42.843599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.730 [2024-11-19 11:25:42.843609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.730 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.853589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.853640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.730 [2024-11-19 11:25:42.853650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.730 [2024-11-19 11:25:42.853655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.730 [2024-11-19 11:25:42.853659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.730 [2024-11-19 11:25:42.853669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.730 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.863606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.863659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.730 [2024-11-19 11:25:42.863669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.730 [2024-11-19 11:25:42.863674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.730 [2024-11-19 11:25:42.863678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.730 [2024-11-19 11:25:42.863687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.730 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.873629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.873681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.730 [2024-11-19 11:25:42.873699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.730 [2024-11-19 11:25:42.873705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.730 [2024-11-19 11:25:42.873710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.730 [2024-11-19 11:25:42.873724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.730 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.883684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.883760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.730 [2024-11-19 11:25:42.883771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.730 [2024-11-19 11:25:42.883776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.730 [2024-11-19 11:25:42.883781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.730 [2024-11-19 11:25:42.883791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.730 qpair failed and we were unable to recover it. 
00:31:34.730 [2024-11-19 11:25:42.893673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.730 [2024-11-19 11:25:42.893721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.731 [2024-11-19 11:25:42.893732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.731 [2024-11-19 11:25:42.893736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.731 [2024-11-19 11:25:42.893741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.731 [2024-11-19 11:25:42.893751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.731 qpair failed and we were unable to recover it. 
00:31:34.731 [2024-11-19 11:25:42.903713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.731 [2024-11-19 11:25:42.903758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.731 [2024-11-19 11:25:42.903768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.731 [2024-11-19 11:25:42.903776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.731 [2024-11-19 11:25:42.903781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.731 [2024-11-19 11:25:42.903791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.731 qpair failed and we were unable to recover it. 
00:31:34.731 [2024-11-19 11:25:42.913701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.731 [2024-11-19 11:25:42.913753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.731 [2024-11-19 11:25:42.913762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.731 [2024-11-19 11:25:42.913767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.731 [2024-11-19 11:25:42.913771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.731 [2024-11-19 11:25:42.913781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.731 qpair failed and we were unable to recover it. 
00:31:34.731 [2024-11-19 11:25:42.923762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.731 [2024-11-19 11:25:42.923816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.731 [2024-11-19 11:25:42.923825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.731 [2024-11-19 11:25:42.923830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.731 [2024-11-19 11:25:42.923835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.731 [2024-11-19 11:25:42.923845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.731 qpair failed and we were unable to recover it. 
00:31:34.731 [2024-11-19 11:25:42.933656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:42.933706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:42.933716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:42.933721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:42.933725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:42.933735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:42.943833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:42.943935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:42.943944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:42.943949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:42.943953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:42.943967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:42.953844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:42.953898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:42.953907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:42.953912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:42.953916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:42.953926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:42.963929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:42.963983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:42.963993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:42.963998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:42.964003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:42.964013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:42.973888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:42.973935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:42.973945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:42.973950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:42.973954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:42.973965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:42.983941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:42.983986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:42.983996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:42.984001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:42.984005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:42.984015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:42.993882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:42.993936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:42.993946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:42.993951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:42.993955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:42.993965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:43.003982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:43.004078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:43.004087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:43.004092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:43.004097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:43.004107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:43.014021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:43.014066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.731 [2024-11-19 11:25:43.014076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.731 [2024-11-19 11:25:43.014081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.731 [2024-11-19 11:25:43.014085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.731 [2024-11-19 11:25:43.014095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.731 qpair failed and we were unable to recover it.
00:31:34.731 [2024-11-19 11:25:43.024017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.731 [2024-11-19 11:25:43.024116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.732 [2024-11-19 11:25:43.024126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.732 [2024-11-19 11:25:43.024131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.732 [2024-11-19 11:25:43.024136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.732 [2024-11-19 11:25:43.024146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.732 qpair failed and we were unable to recover it.
00:31:34.732 [2024-11-19 11:25:43.034100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.732 [2024-11-19 11:25:43.034154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.732 [2024-11-19 11:25:43.034166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.732 [2024-11-19 11:25:43.034171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.732 [2024-11-19 11:25:43.034176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.732 [2024-11-19 11:25:43.034186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.732 qpair failed and we were unable to recover it.
00:31:34.732 [2024-11-19 11:25:43.044129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.732 [2024-11-19 11:25:43.044176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.732 [2024-11-19 11:25:43.044186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.732 [2024-11-19 11:25:43.044191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.732 [2024-11-19 11:25:43.044196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.732 [2024-11-19 11:25:43.044205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.732 qpair failed and we were unable to recover it.
00:31:34.732 [2024-11-19 11:25:43.054144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.732 [2024-11-19 11:25:43.054193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.732 [2024-11-19 11:25:43.054202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.732 [2024-11-19 11:25:43.054207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.732 [2024-11-19 11:25:43.054212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.732 [2024-11-19 11:25:43.054221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.732 qpair failed and we were unable to recover it.
00:31:34.732 [2024-11-19 11:25:43.064143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.732 [2024-11-19 11:25:43.064195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.732 [2024-11-19 11:25:43.064205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.732 [2024-11-19 11:25:43.064210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.732 [2024-11-19 11:25:43.064214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.732 [2024-11-19 11:25:43.064224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.732 qpair failed and we were unable to recover it.
00:31:34.732 [2024-11-19 11:25:43.074187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.732 [2024-11-19 11:25:43.074236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.732 [2024-11-19 11:25:43.074245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.732 [2024-11-19 11:25:43.074250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.732 [2024-11-19 11:25:43.074255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.732 [2024-11-19 11:25:43.074268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.732 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.084231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.084290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.084300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.084304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.084309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.084319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.094241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.094288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.094297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.094302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.094307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.094316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.104280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.104328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.104337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.104342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.104346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.104356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.114173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.114222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.114232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.114237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.114242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.114252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.124392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.124467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.124477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.124482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.124486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.124497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.134352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.134395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.134405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.134410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.134414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.134424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.144385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.144431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.144441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.144445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.144450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.144460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.154289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.154355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.154364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.154369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.154373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.154383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.164478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.164529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.164542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.164547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.164551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.164561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.995 [2024-11-19 11:25:43.174476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.995 [2024-11-19 11:25:43.174521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.995 [2024-11-19 11:25:43.174531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.995 [2024-11-19 11:25:43.174536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.995 [2024-11-19 11:25:43.174541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.995 [2024-11-19 11:25:43.174550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.995 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.184478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.184530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.184540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.184544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.184549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.184558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.194510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.194574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.194592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.194598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.194603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.194616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.204576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.204626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.204644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.204650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.204658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.204673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.214588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.214635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.214646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.214651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.214656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.214666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.224613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.224662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.224680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.224686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.224691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.224704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.234651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.234705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.234716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.234721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.234726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.234736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.244698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.244754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.244764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.244769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.244773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.244783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.254579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.254624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.254634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.254639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.254644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.254654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.264697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.264747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.264758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.264763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.264767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.264777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.274768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:34.996 [2024-11-19 11:25:43.274819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:34.996 [2024-11-19 11:25:43.274828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:34.996 [2024-11-19 11:25:43.274833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:34.996 [2024-11-19 11:25:43.274838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:34.996 [2024-11-19 11:25:43.274848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.996 qpair failed and we were unable to recover it.
00:31:34.996 [2024-11-19 11:25:43.284790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.996 [2024-11-19 11:25:43.284836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.996 [2024-11-19 11:25:43.284846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.996 [2024-11-19 11:25:43.284851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.996 [2024-11-19 11:25:43.284855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.996 [2024-11-19 11:25:43.284868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.996 qpair failed and we were unable to recover it. 
00:31:34.996 [2024-11-19 11:25:43.294796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.996 [2024-11-19 11:25:43.294873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.996 [2024-11-19 11:25:43.294886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.996 [2024-11-19 11:25:43.294891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.996 [2024-11-19 11:25:43.294895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.996 [2024-11-19 11:25:43.294905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.996 qpair failed and we were unable to recover it. 
00:31:34.996 [2024-11-19 11:25:43.304851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.996 [2024-11-19 11:25:43.304902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.996 [2024-11-19 11:25:43.304912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.996 [2024-11-19 11:25:43.304917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.996 [2024-11-19 11:25:43.304921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.996 [2024-11-19 11:25:43.304931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.997 qpair failed and we were unable to recover it. 
00:31:34.997 [2024-11-19 11:25:43.314749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.997 [2024-11-19 11:25:43.314811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.997 [2024-11-19 11:25:43.314822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.997 [2024-11-19 11:25:43.314827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.997 [2024-11-19 11:25:43.314831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.997 [2024-11-19 11:25:43.314841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.997 qpair failed and we were unable to recover it. 
00:31:34.997 [2024-11-19 11:25:43.324913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.997 [2024-11-19 11:25:43.324968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.997 [2024-11-19 11:25:43.324978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.997 [2024-11-19 11:25:43.324983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.997 [2024-11-19 11:25:43.324988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.997 [2024-11-19 11:25:43.324998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.997 qpair failed and we were unable to recover it. 
00:31:34.997 [2024-11-19 11:25:43.334956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:34.997 [2024-11-19 11:25:43.335003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:34.997 [2024-11-19 11:25:43.335013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:34.997 [2024-11-19 11:25:43.335023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:34.997 [2024-11-19 11:25:43.335027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:34.997 [2024-11-19 11:25:43.335038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:34.997 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.344942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.344998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.345008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.345013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.345018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.345028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.354851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.354926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.354936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.354942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.354947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.354957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.365009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.365097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.365107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.365112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.365116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.365126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.375016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.375067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.375076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.375081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.375086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.375096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.385058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.385108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.385117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.385122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.385126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.385136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.395097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.395146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.395156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.395161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.395165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.395175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.405004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.405055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.405064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.405069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.405074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.405083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.415014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.415059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.415068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.415073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.415078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.415087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.425139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.425189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.425199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.425204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.425208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.425218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.435195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.435243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.435253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.435257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.435262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.435271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.445225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.445278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.445288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.445293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.445297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.445307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.455247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.455299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.455308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.455313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.261 [2024-11-19 11:25:43.455317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.261 [2024-11-19 11:25:43.455327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-19 11:25:43.465150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.261 [2024-11-19 11:25:43.465196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.261 [2024-11-19 11:25:43.465206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.261 [2024-11-19 11:25:43.465214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.465218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.465228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.475232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.475280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.475290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.475295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.475299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.475309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.485363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.485415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.485425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.485430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.485435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.485444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.495384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.495465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.495475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.495480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.495485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.495495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.505221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.505259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.505268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.505273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.505278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.505291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.515289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.515338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.515348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.515353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.515357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.515368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.525458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.525508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.525518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.525523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.525528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.525538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.535461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.535524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.535534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.535539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.535543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.535553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.545434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.545475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.545484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.545489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.545493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.545503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
00:31:35.262 [2024-11-19 11:25:43.555556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.262 [2024-11-19 11:25:43.555609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.262 [2024-11-19 11:25:43.555619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.262 [2024-11-19 11:25:43.555623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.262 [2024-11-19 11:25:43.555628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.262 [2024-11-19 11:25:43.555638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.262 qpair failed and we were unable to recover it. 
[identical CONNECT-failure sequence repeated every ~10 ms from 11:25:43.565565 through 11:25:43.896460: Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> CQ transport error -6 (No such device or address) on qpair id 2, tqpair=0x7fe3e4000b90; each attempt ends with "qpair failed and we were unable to recover it."]
00:31:35.792 [2024-11-19 11:25:43.906275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.906313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.906324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.906329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.906333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.906343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.916345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.916385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.916396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.916401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.916405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.916415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.926516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.926565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.926575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.926579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.926584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.926594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.936553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.936605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.936615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.936620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.936624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.936633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.946511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.946551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.946560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.946565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.946570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.946579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.956530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.956572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.956581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.956586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.956590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.956600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.966484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.966530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.966541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.966546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.966550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.966560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.976599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.976642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.976652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.976657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.976661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.976670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.986635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.986676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.986686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.986691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.986696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.986706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:43.996655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:43.996695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:43.996704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:43.996709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:43.996714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:43.996723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.792 qpair failed and we were unable to recover it. 
00:31:35.792 [2024-11-19 11:25:44.006615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.792 [2024-11-19 11:25:44.006662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.792 [2024-11-19 11:25:44.006672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.792 [2024-11-19 11:25:44.006677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.792 [2024-11-19 11:25:44.006681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.792 [2024-11-19 11:25:44.006691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.016676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.016714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.016723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.016728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.016733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.016742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.026718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.026758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.026768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.026775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.026780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.026790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.036712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.036771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.036780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.036785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.036790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.036799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.046842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.046894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.046904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.046909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.046913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.046923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.056796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.056837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.056847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.056852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.056856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.056868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.066699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.066740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.066751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.066756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.066761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.066777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.076916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.076976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.076986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.076991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.076995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.077005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.086808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.086857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.086870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.086874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.086879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.086889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.096922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.097008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.097018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.097023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.097028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.097038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.106967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.107006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.107015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.107020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.107025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.107035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.116975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.117016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.117026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.117031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.117035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.117045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.126964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.127010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.127020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.127025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.127029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.127039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:35.793 [2024-11-19 11:25:44.137035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:35.793 [2024-11-19 11:25:44.137074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:35.793 [2024-11-19 11:25:44.137083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:35.793 [2024-11-19 11:25:44.137088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:35.793 [2024-11-19 11:25:44.137092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:35.793 [2024-11-19 11:25:44.137102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:35.793 qpair failed and we were unable to recover it. 
00:31:36.056 [2024-11-19 11:25:44.147054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.056 [2024-11-19 11:25:44.147093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.056 [2024-11-19 11:25:44.147102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.056 [2024-11-19 11:25:44.147107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.056 [2024-11-19 11:25:44.147111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.056 [2024-11-19 11:25:44.147121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.056 qpair failed and we were unable to recover it. 
00:31:36.056 [2024-11-19 11:25:44.157041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.056 [2024-11-19 11:25:44.157083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.056 [2024-11-19 11:25:44.157096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.056 [2024-11-19 11:25:44.157101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.056 [2024-11-19 11:25:44.157106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.056 [2024-11-19 11:25:44.157117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.056 qpair failed and we were unable to recover it. 
00:31:36.056 [2024-11-19 11:25:44.167101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.056 [2024-11-19 11:25:44.167156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.056 [2024-11-19 11:25:44.167166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.056 [2024-11-19 11:25:44.167171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.056 [2024-11-19 11:25:44.167175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.057 [2024-11-19 11:25:44.167185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.057 qpair failed and we were unable to recover it. 
00:31:36.057 [2024-11-19 11:25:44.177148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.057 [2024-11-19 11:25:44.177185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.057 [2024-11-19 11:25:44.177195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.057 [2024-11-19 11:25:44.177200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.057 [2024-11-19 11:25:44.177204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.057 [2024-11-19 11:25:44.177214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.057 qpair failed and we were unable to recover it. 
[The identical CONNECT-failure sequence (ctrlr.c: Unknown controller ID 0x1 -> nvme_fabric.c: Connect command failed, rc -5 -> sct 1, sc 130 -> nvme_tcp.c: Failed to poll NVMe-oF Fabric CONNECT command -> Failed to connect tqpair=0x7fe3e4000b90 -> nvme_qpair.c: CQ transport error -6 (No such device or address) on qpair id 2 -> "qpair failed and we were unable to recover it.") repeats at ~10 ms intervals from 11:25:44.187 through 11:25:44.518, 34 more times; only the timestamps differ.]
00:31:36.322 [2024-11-19 11:25:44.528063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.528110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.528119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.528124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.528128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.528138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.537958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.537999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.538010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.538014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.538019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.538029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.548117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.548162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.548171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.548176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.548180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.548190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.558001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.558039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.558050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.558055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.558059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.558069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.568152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.568198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.568207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.568212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.568217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.568226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.578155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.578241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.578251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.578256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.578260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.578270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.588207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.588248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.588257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.588265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.588269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.588279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.598241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.598281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.598291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.598296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.598301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.598310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.608286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.608328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.608337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.608342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.608346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.608356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.322 qpair failed and we were unable to recover it. 
00:31:36.322 [2024-11-19 11:25:44.618291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.322 [2024-11-19 11:25:44.618337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.322 [2024-11-19 11:25:44.618347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.322 [2024-11-19 11:25:44.618351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.322 [2024-11-19 11:25:44.618356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.322 [2024-11-19 11:25:44.618365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.323 qpair failed and we were unable to recover it. 
00:31:36.323 [2024-11-19 11:25:44.628317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.323 [2024-11-19 11:25:44.628362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.323 [2024-11-19 11:25:44.628371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.323 [2024-11-19 11:25:44.628376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.323 [2024-11-19 11:25:44.628380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.323 [2024-11-19 11:25:44.628393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.323 qpair failed and we were unable to recover it. 
00:31:36.323 [2024-11-19 11:25:44.638328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.323 [2024-11-19 11:25:44.638372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.323 [2024-11-19 11:25:44.638382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.323 [2024-11-19 11:25:44.638387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.323 [2024-11-19 11:25:44.638391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.323 [2024-11-19 11:25:44.638401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.323 qpair failed and we were unable to recover it. 
00:31:36.323 [2024-11-19 11:25:44.648247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.323 [2024-11-19 11:25:44.648289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.323 [2024-11-19 11:25:44.648299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.323 [2024-11-19 11:25:44.648303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.323 [2024-11-19 11:25:44.648308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.323 [2024-11-19 11:25:44.648318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.323 qpair failed and we were unable to recover it. 
00:31:36.323 [2024-11-19 11:25:44.658265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.323 [2024-11-19 11:25:44.658304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.323 [2024-11-19 11:25:44.658314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.323 [2024-11-19 11:25:44.658319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.323 [2024-11-19 11:25:44.658323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.323 [2024-11-19 11:25:44.658333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.323 qpair failed and we were unable to recover it. 
00:31:36.323 [2024-11-19 11:25:44.668424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.323 [2024-11-19 11:25:44.668478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.323 [2024-11-19 11:25:44.668488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.323 [2024-11-19 11:25:44.668493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.323 [2024-11-19 11:25:44.668497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.323 [2024-11-19 11:25:44.668507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.323 qpair failed and we were unable to recover it. 
00:31:36.585 [2024-11-19 11:25:44.678451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.678492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.678502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.678507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.678511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.678521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.688495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.688534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.688544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.688548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.688553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.688562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.698503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.698544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.698554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.698559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.698563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.698573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.708489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.708570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.708580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.708584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.708589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.708598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.718572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.718643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.718655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.718659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.718664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.718673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.728607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.728650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.728661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.728666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.728670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.728680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.738656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.738737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.738747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.738752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.738756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.738766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.748652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.748690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.748700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.748705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.748709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.748719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.758678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.758748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.758758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.758763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.758773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.758783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.768755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.768795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.768805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.768810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.768814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.768824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.778716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.778761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.778770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.778776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.778780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.778790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.788721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.586 [2024-11-19 11:25:44.788764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.586 [2024-11-19 11:25:44.788774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.586 [2024-11-19 11:25:44.788779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.586 [2024-11-19 11:25:44.788783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.586 [2024-11-19 11:25:44.788793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.586 qpair failed and we were unable to recover it. 
00:31:36.586 [2024-11-19 11:25:44.798643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.586 [2024-11-19 11:25:44.798685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.586 [2024-11-19 11:25:44.798695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.586 [2024-11-19 11:25:44.798700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.586 [2024-11-19 11:25:44.798705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.798715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.808676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.808716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.808726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.808731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.808736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.808746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.818834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.818880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.818890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.818894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.818899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.818909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.828865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.828950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.828960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.828965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.828969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.828979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.838757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.838801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.838810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.838815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.838819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.838829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.848903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.848944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.848956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.848961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.848965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.848976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.858941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.858985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.858995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.859000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.859004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.859014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.868966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.869005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.869014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.869019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.869023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.869033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.878872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.878915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.878925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.878929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.878934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.878944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.889049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.889090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.889099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.889104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.889111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.889121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.899067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.899103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.899112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.899117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.899121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.899131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.909094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.909135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.909144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.909149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.909153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.909163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.919004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.919045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.919055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.919060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.919064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.587 [2024-11-19 11:25:44.919074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.587 qpair failed and we were unable to recover it.
00:31:36.587 [2024-11-19 11:25:44.929054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.587 [2024-11-19 11:25:44.929097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.587 [2024-11-19 11:25:44.929107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.587 [2024-11-19 11:25:44.929112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.587 [2024-11-19 11:25:44.929118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.588 [2024-11-19 11:25:44.929128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.588 qpair failed and we were unable to recover it.
00:31:36.850 [2024-11-19 11:25:44.939170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.850 [2024-11-19 11:25:44.939209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.850 [2024-11-19 11:25:44.939219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.850 [2024-11-19 11:25:44.939224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.850 [2024-11-19 11:25:44.939229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.850 [2024-11-19 11:25:44.939239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.850 qpair failed and we were unable to recover it.
00:31:36.850 [2024-11-19 11:25:44.949306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.850 [2024-11-19 11:25:44.949351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.850 [2024-11-19 11:25:44.949360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.850 [2024-11-19 11:25:44.949365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.850 [2024-11-19 11:25:44.949370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.850 [2024-11-19 11:25:44.949379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.850 qpair failed and we were unable to recover it.
00:31:36.850 [2024-11-19 11:25:44.959209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.850 [2024-11-19 11:25:44.959253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.850 [2024-11-19 11:25:44.959264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.850 [2024-11-19 11:25:44.959269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.850 [2024-11-19 11:25:44.959273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.850 [2024-11-19 11:25:44.959283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.850 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:44.969256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:44.969333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:44.969343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:44.969347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:44.969352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:44.969362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:44.979295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:44.979333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:44.979345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:44.979350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:44.979355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:44.979364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:44.989301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:44.989374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:44.989384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:44.989388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:44.989393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:44.989402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:44.999333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:44.999373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:44.999382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:44.999387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:44.999391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:44.999401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.009369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.009409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.009419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.009424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.009428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.009439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.019366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.019455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.019464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.019472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.019477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.019486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.029265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.029309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.029319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.029323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.029328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.029337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.039422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.039478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.039488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.039493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.039497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.039507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.049472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.049518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.049528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.049533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.049538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.049548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.059442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.059480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.059490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.059495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.059500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.059513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.069542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.069622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.069631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.069636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.069641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.069651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.079527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.079572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.079581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.079586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.079591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.079601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.089447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.851 [2024-11-19 11:25:45.089492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.851 [2024-11-19 11:25:45.089502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.851 [2024-11-19 11:25:45.089507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.851 [2024-11-19 11:25:45.089512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.851 [2024-11-19 11:25:45.089522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.851 qpair failed and we were unable to recover it.
00:31:36.851 [2024-11-19 11:25:45.099600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.852 [2024-11-19 11:25:45.099647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.852 [2024-11-19 11:25:45.099665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.852 [2024-11-19 11:25:45.099671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.852 [2024-11-19 11:25:45.099676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.852 [2024-11-19 11:25:45.099690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.852 qpair failed and we were unable to recover it.
00:31:36.852 [2024-11-19 11:25:45.109630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.852 [2024-11-19 11:25:45.109677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.852 [2024-11-19 11:25:45.109695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.852 [2024-11-19 11:25:45.109701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.852 [2024-11-19 11:25:45.109706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.852 [2024-11-19 11:25:45.109720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.852 qpair failed and we were unable to recover it.
00:31:36.852 [2024-11-19 11:25:45.119652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.852 [2024-11-19 11:25:45.119695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.852 [2024-11-19 11:25:45.119707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.852 [2024-11-19 11:25:45.119712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.852 [2024-11-19 11:25:45.119716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.852 [2024-11-19 11:25:45.119728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.852 qpair failed and we were unable to recover it.
00:31:36.852 [2024-11-19 11:25:45.129715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.852 [2024-11-19 11:25:45.129758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.852 [2024-11-19 11:25:45.129769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.852 [2024-11-19 11:25:45.129774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.852 [2024-11-19 11:25:45.129778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.852 [2024-11-19 11:25:45.129788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.852 qpair failed and we were unable to recover it.
00:31:36.852 [2024-11-19 11:25:45.139699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.852 [2024-11-19 11:25:45.139780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.852 [2024-11-19 11:25:45.139790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.852 [2024-11-19 11:25:45.139795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.852 [2024-11-19 11:25:45.139800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:31:36.852 [2024-11-19 11:25:45.139810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:36.852 qpair failed and we were unable to recover it.
00:31:36.852 [2024-11-19 11:25:45.149703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.852 [2024-11-19 11:25:45.149741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.852 [2024-11-19 11:25:45.149751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.852 [2024-11-19 11:25:45.149759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.852 [2024-11-19 11:25:45.149764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.852 [2024-11-19 11:25:45.149774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.852 qpair failed and we were unable to recover it. 
00:31:36.852 [2024-11-19 11:25:45.159746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.852 [2024-11-19 11:25:45.159789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.852 [2024-11-19 11:25:45.159800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.852 [2024-11-19 11:25:45.159804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.852 [2024-11-19 11:25:45.159809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.852 [2024-11-19 11:25:45.159819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.852 qpair failed and we were unable to recover it. 
00:31:36.852 [2024-11-19 11:25:45.169800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.852 [2024-11-19 11:25:45.169842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.852 [2024-11-19 11:25:45.169853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.852 [2024-11-19 11:25:45.169858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.852 [2024-11-19 11:25:45.169867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.852 [2024-11-19 11:25:45.169880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.852 qpair failed and we were unable to recover it. 
00:31:36.852 [2024-11-19 11:25:45.179734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.852 [2024-11-19 11:25:45.179775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.852 [2024-11-19 11:25:45.179785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.852 [2024-11-19 11:25:45.179790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.852 [2024-11-19 11:25:45.179794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.852 [2024-11-19 11:25:45.179805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.852 qpair failed and we were unable to recover it. 
00:31:36.852 [2024-11-19 11:25:45.189838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.852 [2024-11-19 11:25:45.189887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.852 [2024-11-19 11:25:45.189896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.852 [2024-11-19 11:25:45.189901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.852 [2024-11-19 11:25:45.189906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.852 [2024-11-19 11:25:45.189919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.852 qpair failed and we were unable to recover it. 
00:31:36.852 [2024-11-19 11:25:45.199736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.852 [2024-11-19 11:25:45.199788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.852 [2024-11-19 11:25:45.199798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.852 [2024-11-19 11:25:45.199802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.852 [2024-11-19 11:25:45.199807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:36.852 [2024-11-19 11:25:45.199817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.852 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.209915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.209959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.209969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.209975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.209979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.209989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.219926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.219965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.219975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.219980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.219984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.219995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.229937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.229975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.229985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.229990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.229994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.230004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.239839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.239885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.239896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.239901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.239905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.239916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.249917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.249979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.249988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.249993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.249998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.250007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.260044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.260083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.260093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.260098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.260102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.260112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.270067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.270148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.270158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.270163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.270168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.270177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.280093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.280153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.280166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.280170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.280175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.280185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.290127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.290175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.290185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.290190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.290194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.290204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.300150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.300189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.300199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.300204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.300208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.300218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.116 [2024-11-19 11:25:45.310150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.116 [2024-11-19 11:25:45.310189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.116 [2024-11-19 11:25:45.310198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.116 [2024-11-19 11:25:45.310203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.116 [2024-11-19 11:25:45.310207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.116 [2024-11-19 11:25:45.310217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.116 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.320198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.320240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.320249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.320254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.320261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.320271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.330244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.330287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.330297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.330301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.330306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.330316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.340220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.340260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.340271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.340275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.340280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.340289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.350241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.350282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.350292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.350297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.350302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.350311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.360211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.360250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.360261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.360266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.360270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.360280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.370332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.370377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.370387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.370392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.370396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.370406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.380330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.380367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.380376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.380381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.380385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.380396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.390360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.390429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.390438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.390443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.390448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.390457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.400422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.400463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.400473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.400478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.400482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.400492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.410447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.410492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.410504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.410509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.410513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.410523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.420473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.420513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.420523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.420528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.420532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.420542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.430481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.430520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.430529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.430534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.430538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.117 [2024-11-19 11:25:45.430548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.117 qpair failed and we were unable to recover it. 
00:31:37.117 [2024-11-19 11:25:45.440377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.117 [2024-11-19 11:25:45.440422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.117 [2024-11-19 11:25:45.440432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.117 [2024-11-19 11:25:45.440437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.117 [2024-11-19 11:25:45.440441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.118 [2024-11-19 11:25:45.440451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.118 qpair failed and we were unable to recover it. 
00:31:37.118 [2024-11-19 11:25:45.450520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.118 [2024-11-19 11:25:45.450562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.118 [2024-11-19 11:25:45.450571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.118 [2024-11-19 11:25:45.450576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.118 [2024-11-19 11:25:45.450586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.118 [2024-11-19 11:25:45.450596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.118 qpair failed and we were unable to recover it. 
00:31:37.118 [2024-11-19 11:25:45.460559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.118 [2024-11-19 11:25:45.460597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.118 [2024-11-19 11:25:45.460607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.118 [2024-11-19 11:25:45.460611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.118 [2024-11-19 11:25:45.460616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.118 [2024-11-19 11:25:45.460626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.118 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-11-19 11:25:45.470592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.381 [2024-11-19 11:25:45.470679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.381 [2024-11-19 11:25:45.470688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.381 [2024-11-19 11:25:45.470693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.381 [2024-11-19 11:25:45.470697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.381 [2024-11-19 11:25:45.470707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-11-19 11:25:45.480627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.381 [2024-11-19 11:25:45.480670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.381 [2024-11-19 11:25:45.480680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.381 [2024-11-19 11:25:45.480685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.381 [2024-11-19 11:25:45.480689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.381 [2024-11-19 11:25:45.480699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-11-19 11:25:45.490663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.381 [2024-11-19 11:25:45.490702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.381 [2024-11-19 11:25:45.490712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.381 [2024-11-19 11:25:45.490717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.381 [2024-11-19 11:25:45.490721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.381 [2024-11-19 11:25:45.490731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-11-19 11:25:45.500534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.381 [2024-11-19 11:25:45.500594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.381 [2024-11-19 11:25:45.500603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.381 [2024-11-19 11:25:45.500608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.381 [2024-11-19 11:25:45.500613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.381 [2024-11-19 11:25:45.500623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-11-19 11:25:45.510726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.381 [2024-11-19 11:25:45.510773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.381 [2024-11-19 11:25:45.510782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.381 [2024-11-19 11:25:45.510787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.381 [2024-11-19 11:25:45.510792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.381 [2024-11-19 11:25:45.510801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-11-19 11:25:45.520726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.381 [2024-11-19 11:25:45.520772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.381 [2024-11-19 11:25:45.520782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.381 [2024-11-19 11:25:45.520787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.381 [2024-11-19 11:25:45.520792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.381 [2024-11-19 11:25:45.520802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.381 [2024-11-19 11:25:45.530776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.381 [2024-11-19 11:25:45.530817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.381 [2024-11-19 11:25:45.530827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.381 [2024-11-19 11:25:45.530832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.381 [2024-11-19 11:25:45.530836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.381 [2024-11-19 11:25:45.530846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.381 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.540668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.540714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.540726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.540731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.540735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.540745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.550684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.550723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.550733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.550738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.550742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.550752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.560713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.560754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.560764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.560769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.560773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.560783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.570897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.570942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.570952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.570957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.570961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.570971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.580916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.580956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.580966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.580974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.580978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.580988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.590795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.590834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.590844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.590849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.590853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.590866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.600828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.600874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.600883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.600888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.600893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.600903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.611006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.611048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.611057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.611062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.611067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.611077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.621007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.621053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.621062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.621067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.621072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.621084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.631047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.631091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.631100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.631105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.631110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.631119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.641077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.641147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.641156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.641161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.641165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.641175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.651036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.651121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.382 [2024-11-19 11:25:45.651130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.382 [2024-11-19 11:25:45.651135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.382 [2024-11-19 11:25:45.651139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.382 [2024-11-19 11:25:45.651149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.382 qpair failed and we were unable to recover it. 
00:31:37.382 [2024-11-19 11:25:45.661129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.382 [2024-11-19 11:25:45.661167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.383 [2024-11-19 11:25:45.661176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.383 [2024-11-19 11:25:45.661181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.383 [2024-11-19 11:25:45.661185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.383 [2024-11-19 11:25:45.661195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-11-19 11:25:45.671140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.383 [2024-11-19 11:25:45.671197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.383 [2024-11-19 11:25:45.671207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.383 [2024-11-19 11:25:45.671212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.383 [2024-11-19 11:25:45.671216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.383 [2024-11-19 11:25:45.671226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-11-19 11:25:45.681190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.383 [2024-11-19 11:25:45.681267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.383 [2024-11-19 11:25:45.681277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.383 [2024-11-19 11:25:45.681282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.383 [2024-11-19 11:25:45.681286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.383 [2024-11-19 11:25:45.681296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-11-19 11:25:45.691206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.383 [2024-11-19 11:25:45.691251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.383 [2024-11-19 11:25:45.691261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.383 [2024-11-19 11:25:45.691266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.383 [2024-11-19 11:25:45.691271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.383 [2024-11-19 11:25:45.691280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-11-19 11:25:45.701228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.383 [2024-11-19 11:25:45.701270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.383 [2024-11-19 11:25:45.701280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.383 [2024-11-19 11:25:45.701285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.383 [2024-11-19 11:25:45.701289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.383 [2024-11-19 11:25:45.701299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-11-19 11:25:45.711290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.383 [2024-11-19 11:25:45.711334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.383 [2024-11-19 11:25:45.711343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.383 [2024-11-19 11:25:45.711350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.383 [2024-11-19 11:25:45.711355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.383 [2024-11-19 11:25:45.711365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.383 [2024-11-19 11:25:45.721134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.383 [2024-11-19 11:25:45.721211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.383 [2024-11-19 11:25:45.721220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.383 [2024-11-19 11:25:45.721225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.383 [2024-11-19 11:25:45.721229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.383 [2024-11-19 11:25:45.721239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.383 qpair failed and we were unable to recover it. 
00:31:37.644 [2024-11-19 11:25:45.731373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.644 [2024-11-19 11:25:45.731418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.644 [2024-11-19 11:25:45.731427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.644 [2024-11-19 11:25:45.731432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.644 [2024-11-19 11:25:45.731436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:31:37.644 [2024-11-19 11:25:45.731446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:37.644 qpair failed and we were unable to recover it. 00:31:37.644 [2024-11-19 11:25:45.731594] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:31:37.644 A controller has encountered a failure and is being reset. 00:31:37.644 Controller properly reset. 00:31:37.644 Initializing NVMe Controllers 00:31:37.644 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:37.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:37.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:37.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:37.644 Initialization complete. Launching workers. 
00:31:37.644 Starting thread on core 1 00:31:37.644 Starting thread on core 2 00:31:37.644 Starting thread on core 3 00:31:37.644 Starting thread on core 0 00:31:37.644 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:37.644 00:31:37.644 real 0m11.386s 00:31:37.644 user 0m21.608s 00:31:37.644 sys 0m3.720s 00:31:37.644 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.644 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:37.644 ************************************ 00:31:37.644 END TEST nvmf_target_disconnect_tc2 00:31:37.644 ************************************ 00:31:37.644 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:37.644 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:37.644 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:37.645 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.645 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:37.645 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.645 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:37.645 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.645 11:25:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.645 rmmod nvme_tcp 00:31:37.645 rmmod nvme_fabrics 00:31:37.906 rmmod nvme_keyring 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 146850 ']' 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 146850 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 146850 ']' 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 146850 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146850 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146850' 00:31:37.906 killing process with pid 146850 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 146850 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 146850 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.906 11:25:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.454 11:25:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.454 00:31:40.454 real 0m22.299s 00:31:40.454 user 0m49.700s 00:31:40.454 sys 0m10.223s 00:31:40.454 11:25:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.454 11:25:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:40.454 ************************************ 00:31:40.454 END TEST nvmf_target_disconnect 00:31:40.454 ************************************ 00:31:40.454 11:25:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:40.454 00:31:40.454 real 6m46.096s 00:31:40.454 user 11m30.429s 00:31:40.454 sys 2m23.757s 00:31:40.454 11:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.454 11:25:48 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.454 ************************************ 00:31:40.454 END TEST nvmf_host 00:31:40.454 ************************************ 00:31:40.454 11:25:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:40.454 11:25:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:40.454 11:25:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:40.454 11:25:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.454 11:25:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.454 11:25:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.454 ************************************ 00:31:40.454 START TEST nvmf_target_core_interrupt_mode 00:31:40.454 ************************************ 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:40.454 * Looking for test storage... 
00:31:40.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:40.454 11:25:48 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.454 --rc 
genhtml_branch_coverage=1 00:31:40.454 --rc genhtml_function_coverage=1 00:31:40.454 --rc genhtml_legend=1 00:31:40.454 --rc geninfo_all_blocks=1 00:31:40.454 --rc geninfo_unexecuted_blocks=1 00:31:40.454 00:31:40.454 ' 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.454 --rc genhtml_branch_coverage=1 00:31:40.454 --rc genhtml_function_coverage=1 00:31:40.454 --rc genhtml_legend=1 00:31:40.454 --rc geninfo_all_blocks=1 00:31:40.454 --rc geninfo_unexecuted_blocks=1 00:31:40.454 00:31:40.454 ' 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.454 --rc genhtml_branch_coverage=1 00:31:40.454 --rc genhtml_function_coverage=1 00:31:40.454 --rc genhtml_legend=1 00:31:40.454 --rc geninfo_all_blocks=1 00:31:40.454 --rc geninfo_unexecuted_blocks=1 00:31:40.454 00:31:40.454 ' 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.454 --rc genhtml_branch_coverage=1 00:31:40.454 --rc genhtml_function_coverage=1 00:31:40.454 --rc genhtml_legend=1 00:31:40.454 --rc geninfo_all_blocks=1 00:31:40.454 --rc geninfo_unexecuted_blocks=1 00:31:40.454 00:31:40.454 ' 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.454 
11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.454 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 11:25:48 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.455 
11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.455 ************************************ 00:31:40.455 START TEST nvmf_abort 00:31:40.455 ************************************ 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:40.455 * Looking for test storage... 
00:31:40.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.455 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:40.717 11:25:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.717 --rc genhtml_branch_coverage=1 00:31:40.717 --rc genhtml_function_coverage=1 00:31:40.717 --rc genhtml_legend=1 00:31:40.717 --rc geninfo_all_blocks=1 00:31:40.717 --rc geninfo_unexecuted_blocks=1 00:31:40.717 00:31:40.717 ' 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.717 --rc genhtml_branch_coverage=1 00:31:40.717 --rc genhtml_function_coverage=1 00:31:40.717 --rc genhtml_legend=1 00:31:40.717 --rc geninfo_all_blocks=1 00:31:40.717 --rc geninfo_unexecuted_blocks=1 00:31:40.717 00:31:40.717 ' 00:31:40.717 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.718 --rc genhtml_branch_coverage=1 00:31:40.718 --rc genhtml_function_coverage=1 00:31:40.718 --rc genhtml_legend=1 00:31:40.718 --rc geninfo_all_blocks=1 00:31:40.718 --rc geninfo_unexecuted_blocks=1 00:31:40.718 00:31:40.718 ' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.718 --rc genhtml_branch_coverage=1 00:31:40.718 --rc genhtml_function_coverage=1 00:31:40.718 --rc genhtml_legend=1 00:31:40.718 --rc geninfo_all_blocks=1 00:31:40.718 --rc geninfo_unexecuted_blocks=1 00:31:40.718 00:31:40.718 ' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.718 11:25:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.718 11:25:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.718 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.859 11:25:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:48.859 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:48.859 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.859 
11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:48.859 Found net devices under 0000:31:00.0: cvl_0_0 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:48.859 Found net devices under 0000:31:00.1: cvl_0_1 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.859 11:25:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.859 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.860 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.860 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.860 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.860 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.860 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.860 11:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.860 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.860 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.860 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.860 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:31:49.121 00:31:49.121 --- 10.0.0.2 ping statistics --- 00:31:49.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.121 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:31:49.121 00:31:49.121 --- 10.0.0.1 ping statistics --- 00:31:49.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.121 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=152952 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 152952 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 152952 ']' 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.121 11:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.121 [2024-11-19 11:25:57.411549] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:49.121 [2024-11-19 11:25:57.412709] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:31:49.121 [2024-11-19 11:25:57.412761] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.383 [2024-11-19 11:25:57.521276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:49.383 [2024-11-19 11:25:57.572828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.383 [2024-11-19 11:25:57.572885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.383 [2024-11-19 11:25:57.572894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.383 [2024-11-19 11:25:57.572901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.383 [2024-11-19 11:25:57.572907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.383 [2024-11-19 11:25:57.574937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.383 [2024-11-19 11:25:57.575105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:49.383 [2024-11-19 11:25:57.575215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.383 [2024-11-19 11:25:57.650301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:49.383 [2024-11-19 11:25:57.650446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:49.383 [2024-11-19 11:25:57.651215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:49.383 [2024-11-19 11:25:57.651460] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.954 [2024-11-19 11:25:58.248178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.954 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:31:50.214 Malloc0 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:50.214 Delay0 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:50.214 [2024-11-19 11:25:58.352087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.214 11:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:50.214 [2024-11-19 11:25:58.476614] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:52.760 Initializing NVMe Controllers 00:31:52.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:52.760 controller IO queue size 128 less than required 00:31:52.760 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:52.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:52.760 Initialization complete. Launching workers. 
00:31:52.760 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29211 00:31:52.760 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29268, failed to submit 66 00:31:52.760 success 29211, unsuccessful 57, failed 0 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.760 rmmod nvme_tcp 00:31:52.760 rmmod nvme_fabrics 00:31:52.760 rmmod nvme_keyring 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.760 11:26:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 152952 ']' 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 152952 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 152952 ']' 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 152952 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152952 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152952' 00:31:52.760 killing process with pid 152952 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 152952 00:31:52.760 11:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 152952 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.760 11:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.312 00:31:55.312 real 0m14.410s 00:31:55.312 user 0m11.684s 00:31:55.312 sys 0m7.574s 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.312 ************************************ 00:31:55.312 END TEST nvmf_abort 00:31:55.312 ************************************ 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:55.312 ************************************ 00:31:55.312 START TEST nvmf_ns_hotplug_stress 00:31:55.312 ************************************ 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:55.312 * Looking for test storage... 00:31:55.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:55.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.312 --rc genhtml_branch_coverage=1 00:31:55.312 --rc genhtml_function_coverage=1 00:31:55.312 --rc genhtml_legend=1 00:31:55.312 --rc geninfo_all_blocks=1 00:31:55.312 --rc geninfo_unexecuted_blocks=1 00:31:55.312 00:31:55.312 ' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:55.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.312 --rc genhtml_branch_coverage=1 00:31:55.312 --rc genhtml_function_coverage=1 00:31:55.312 --rc genhtml_legend=1 00:31:55.312 --rc geninfo_all_blocks=1 00:31:55.312 --rc geninfo_unexecuted_blocks=1 00:31:55.312 00:31:55.312 ' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:55.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.312 --rc genhtml_branch_coverage=1 00:31:55.312 --rc genhtml_function_coverage=1 00:31:55.312 --rc genhtml_legend=1 00:31:55.312 --rc geninfo_all_blocks=1 00:31:55.312 --rc geninfo_unexecuted_blocks=1 00:31:55.312 00:31:55.312 ' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:55.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.312 --rc genhtml_branch_coverage=1 00:31:55.312 --rc genhtml_function_coverage=1 00:31:55.312 --rc genhtml_legend=1 00:31:55.312 --rc geninfo_all_blocks=1 00:31:55.312 --rc geninfo_unexecuted_blocks=1 00:31:55.312 00:31:55.312 ' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.312 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.313 11:26:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:55.313 11:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:55.313 11:26:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:03.458 11:26:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.458 
11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:03.458 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.458 11:26:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:03.458 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.458 11:26:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:03.458 Found net devices under 0000:31:00.0: cvl_0_0 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.458 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:03.459 Found net devices under 0000:31:00.1: cvl_0_1 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:03.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:32:03.459 00:32:03.459 --- 10.0.0.2 ping statistics --- 00:32:03.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.459 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:03.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:32:03.459 00:32:03.459 --- 10.0.0.1 ping statistics --- 00:32:03.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.459 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.459 11:26:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:03.459 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=158321 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 158321 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 158321 ']' 00:32:03.720 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.721 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.721 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:03.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.721 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.721 11:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:03.721 [2024-11-19 11:26:11.902729] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:03.721 [2024-11-19 11:26:11.903874] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:32:03.721 [2024-11-19 11:26:11.903925] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.721 [2024-11-19 11:26:12.015036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:03.721 [2024-11-19 11:26:12.065649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.721 [2024-11-19 11:26:12.065700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.721 [2024-11-19 11:26:12.065708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.721 [2024-11-19 11:26:12.065716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.721 [2024-11-19 11:26:12.065722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:03.721 [2024-11-19 11:26:12.067576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:03.721 [2024-11-19 11:26:12.067741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.721 [2024-11-19 11:26:12.067741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:03.982 [2024-11-19 11:26:12.144407] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:03.982 [2024-11-19 11:26:12.144484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:03.982 [2024-11-19 11:26:12.144971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:03.982 [2024-11-19 11:26:12.145310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:04.555 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:04.555 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:32:04.555 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:04.555 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:04.555 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:04.555 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:04.555 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
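The interface/namespace plumbing traced above (nvmf/common.sh@265–291) can be sketched as a dry-run script. The interface names `cvl_0_0`/`cvl_0_1`, the namespace `cvl_0_0_ns_spdk`, and the 10.0.0.x addresses are taken from the log; the `setup_cmds` helper is hypothetical and only prints the commands, so the sketch runs without root:

```shell
#!/bin/sh
# Dry-run sketch of the netns setup the log shows. setup_cmds prints, one per
# line, the privileged commands the real nvmf/common.sh executes.
setup_cmds() {
    ns=cvl_0_0_ns_spdk
    echo "ip -4 addr flush cvl_0_0"
    echo "ip -4 addr flush cvl_0_1"
    # Move one end of the pair into a private namespace for the target app.
    echo "ip netns add $ns"
    echo "ip link set cvl_0_0 netns $ns"
    # Host side gets 10.0.0.1, namespaced side gets 10.0.0.2.
    echo "ip addr add 10.0.0.1/24 dev cvl_0_1"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"
    echo "ip link set cvl_0_1 up"
    echo "ip netns exec $ns ip link set cvl_0_0 up"
    echo "ip netns exec $ns ip link set lo up"
    # Open the NVMe/TCP listener port on the host side.
    echo "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
    # Verify reachability in both directions, as the log's pings do.
    echo "ping -c 1 10.0.0.2"
    echo "ip netns exec $ns ping -c 1 10.0.0.1"
}
setup_cmds
```

After this setup succeeds, the log prepends `ip netns exec cvl_0_0_ns_spdk` to the `nvmf_tgt` invocation (via `NVMF_TARGET_NS_CMD`), so the target binds inside the namespace while the initiator-side tools connect from the host over 10.0.0.2:4420.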
00:32:04.555 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:04.816 [2024-11-19 11:26:12.916630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.816 11:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:04.816 11:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.077 [2024-11-19 11:26:13.293335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.077 11:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:05.338 11:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:05.338 Malloc0 00:32:05.339 11:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:05.599 Delay0 00:32:05.599 11:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:05.866 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:05.866 NULL1 00:32:05.866 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:06.127 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=158873 00:32:06.127 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:06.127 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:06.127 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.388 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:06.649 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:06.649 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:06.649 true 00:32:06.649 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:06.649 11:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.909 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:07.170 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:07.170 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:07.170 true 00:32:07.431 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:07.431 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:07.432 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:07.693 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:07.693 11:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:07.954 true 00:32:07.954 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:07.954 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:07.954 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.214 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:08.214 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:08.474 true 00:32:08.474 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:08.474 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:08.735 11:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.735 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:08.735 11:26:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:08.996 true 00:32:08.996 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:08.996 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.257 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:09.257 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:09.257 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:09.517 true 00:32:09.518 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:09.518 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.779 11:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.040 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:32:10.040 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:10.040 true 00:32:10.040 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:10.040 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.301 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.561 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:10.561 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:10.561 true 00:32:10.561 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:10.561 11:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.821 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:11.082 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:32:11.082 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:11.347 true 00:32:11.347 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:11.347 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.347 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:11.609 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:11.609 11:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:11.869 true 00:32:11.870 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:11.870 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.870 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:12.130 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:12.130 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:12.391 true 00:32:12.391 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:12.391 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:12.652 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:12.652 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:12.652 11:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:12.913 true 00:32:12.913 11:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:12.913 11:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.175 11:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.175 11:26:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:13.175 11:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:13.436 true 00:32:13.436 11:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:13.436 11:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.696 11:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.956 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:13.956 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:13.956 true 00:32:13.956 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:13.956 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.217 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:32:14.478 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:14.478 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:14.478 true 00:32:14.478 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:14.478 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.740 11:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:15.001 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:15.001 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:15.001 true 00:32:15.262 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:15.262 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.262 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:32:15.522 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:15.522 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:15.783 true 00:32:15.783 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:15.783 11:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.783 11:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.043 11:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:16.043 11:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:16.303 true 00:32:16.304 11:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:16.304 11:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.565 11:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.565 11:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:16.565 11:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:16.826 true 00:32:16.826 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:16.826 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.087 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.087 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:17.087 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:17.347 true 00:32:17.347 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:17.347 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.607 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.867 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:17.867 11:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:17.867 true 00:32:17.867 11:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:17.867 11:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.129 11:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:18.406 11:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:18.406 11:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:18.406 true 00:32:18.406 11:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:18.406 11:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.761 11:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:18.761 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:18.761 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:19.048 true 00:32:19.048 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:19.048 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.310 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:19.310 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:19.310 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:19.571 true 00:32:19.571 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:19.571 11:26:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.832 11:26:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:19.832 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:19.832 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:20.093 true 00:32:20.093 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:20.094 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.355 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.615 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:20.615 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:20.615 true 00:32:20.615 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:20.615 11:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:32:20.876 11:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.135 11:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:21.135 11:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:21.135 true 00:32:21.135 11:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:21.135 11:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.396 11:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.658 11:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:21.658 11:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:21.919 true 00:32:21.919 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:21.919 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:32:21.919 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.181 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:32:22.181 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:32:22.181 true 00:32:22.442 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:22.442 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:22.442 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.703 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:32:22.703 11:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:32:22.964 true 00:32:22.964 11:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:22.964 11:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:22.964 11:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.225 11:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:32:23.225 11:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:32:23.486 true 00:32:23.486 11:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:23.486 11:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.747 11:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.747 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:32:23.747 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:32:24.008 true 00:32:24.008 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:24.008 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.269 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:24.269 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:32:24.269 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:32:24.531 true 00:32:24.531 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:24.531 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.792 11:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.054 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:32:25.054 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:32:25.054 true 00:32:25.054 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:25.054 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:25.316 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.578 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:32:25.578 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:32:25.578 true 00:32:25.839 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:25.839 11:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:25.839 11:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.101 11:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:32:26.101 11:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:32:26.363 true 00:32:26.363 11:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:26.363 11:26:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.363 11:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.624 11:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:32:26.624 11:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:32:26.886 true 00:32:26.886 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:26.886 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:27.146 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:27.146 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:32:27.146 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:32:27.407 true 00:32:27.407 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 
00:32:27.407 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:27.668 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:27.668 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:32:27.668 11:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:32:27.929 true 00:32:27.929 11:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:27.929 11:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:28.189 11:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:28.189 11:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:32:28.189 11:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:32:28.450 true 00:32:28.450 11:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 158873 00:32:28.450 11:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:28.711 11:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:28.711 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:32:28.711 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:32:28.972 true 00:32:28.972 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:28.972 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.233 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:29.493 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:32:29.493 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:32:29.493 true 00:32:29.493 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:29.493 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.754 11:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:30.015 11:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:32:30.015 11:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:32:30.015 true 00:32:30.015 11:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:30.015 11:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.275 11:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:30.536 11:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:32:30.536 11:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:32:30.536 true 00:32:30.798 11:26:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:30.798 11:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.798 11:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.059 11:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:32:31.059 11:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:32:31.319 true 00:32:31.319 11:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:31.319 11:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.319 11:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.581 11:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:32:31.581 11:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:32:31.843 true 
00:32:31.843 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:31.843 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.104 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.104 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:32:32.104 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:32:32.366 true 00:32:32.366 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:32.366 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.628 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.628 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:32:32.628 11:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:32:32.890 true 00:32:32.890 11:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:32.890 11:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.151 11:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:33.412 11:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:32:33.412 11:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:32:33.412 true 00:32:33.412 11:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:33.412 11:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.673 11:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:33.934 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:32:33.934 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:32:33.934 true 00:32:33.934 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:33.934 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.195 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.456 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:32:34.456 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:32:34.456 true 00:32:34.456 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:34.456 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.718 11:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.980 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:32:34.980 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:32:34.980 true 00:32:35.241 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:35.241 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.241 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.502 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:32:35.502 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:32:35.763 true 00:32:35.763 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873 00:32:35.763 11:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.763 11:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:36.024 11:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:32:36.024 11:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:32:36.285 true
00:32:36.285 11:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873
00:32:36.285 11:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:36.285 Initializing NVMe Controllers
00:32:36.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:36.285 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:32:36.285 Controller IO queue size 128, less than required.
00:32:36.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:36.285 WARNING: Some requested NVMe devices were skipped
00:32:36.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:36.285 Initialization complete. Launching workers.
00:32:36.285 ========================================================
00:32:36.285 Latency(us)
00:32:36.285 Device Information                                                  :       IOPS      MiB/s    Average        min        max
00:32:36.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   29788.79      14.55    4296.84    1498.45   11089.10
00:32:36.285 ========================================================
00:32:36.285 Total                                                               :   29788.79      14.55    4296.84    1498.45   11089.10
00:32:36.546 11:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:36.546 11:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:32:36.546 11:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:32:36.807 true
00:32:36.807 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 158873
00:32:36.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (158873) - No such process
00:32:36.807 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 158873
00:32:36.807 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:37.068 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:37.068 11:26:45
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:37.068 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:37.068 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:37.068 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:37.068 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:37.329 null0 00:32:37.329 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:37.329 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:37.329 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:37.591 null1 00:32:37.591 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:37.591 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:37.591 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:37.591 null2 00:32:37.591 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:37.591 11:26:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:37.591 11:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:37.853 null3 00:32:37.853 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:37.853 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:37.853 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:38.114 null4 00:32:38.114 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:38.114 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:38.114 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:38.114 null5 00:32:38.114 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:38.114 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:38.114 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:38.375 null6 00:32:38.375 11:26:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:38.375 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:38.375 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:38.636 null7 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
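The `@58`-`@60` entries above create eight 100 MiB null bdevs (`null0` .. `null7`) with a 4096-byte block size, one per worker thread. The loop is a plain counted `for` over `nthreads`; a runnable sketch with a stub `rpc` in place of `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Stub for scripts/rpc.py (bdev_null_create <name> <size_mb> <block_size>).
rpc() { echo "created $2"; }

nthreads=8
for (( i = 0; i < nthreads; i++ )); do
    # 100 MiB null bdev, 4096-byte blocks, matching the trace
    rpc bdev_null_create "null$i" 100 4096
done
```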
00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:38.636 11:26:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:38.636 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 165185 165187 165188 165190 165192 165194 165196 165197 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.637 11:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:38.897 11:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:38.897 11:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.897 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.158 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.159 11:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:39.159 11:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.159 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.420 11:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:39.420 11:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.420 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:39.681 11:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
1 nqn.2016-06.io.spdk:cnode1 null0 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.681 11:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.681 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:39.681 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.681 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.681 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.942 
11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.942 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:39.943 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:40.203 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.203 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.203 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:40.203 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.203 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.204 11:26:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.204 11:26:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:40.204 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:32:40.465 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:40.726 11:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:40.726 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:40.726 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.726 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.726 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:40.726 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:40.726 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.726 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.726 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:40.989 11:26:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:40.989 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.251 11:26:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.251 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.513 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:41.776 11:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:41.776 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:41.776 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:41.776 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:41.776 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:41.776 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.048 11:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:42.048 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.309 11:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.309 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
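The trace above is the namespace hotplug loop from target/ns_hotplug_stress.sh (lines 16–18 in the xtrace tags). A minimal runnable sketch of that loop, reconstructed from the log — the rpc.py call is stubbed with an echo so the sketch is self-contained, and the NQN and the null-bdev naming (nsid N maps to bdev null(N-1)) are taken from the traced commands:

```shell
#!/usr/bin/env bash
# Stand-in for scripts/rpc.py so this sketch runs without an SPDK target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
  nsid=$(( (RANDOM % 8) + 1 ))                       # pick a namespace ID 1..8
  rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$(( nsid - 1 ))"
  rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
  (( ++i ))
done
```

In the real run these add/remove calls race against each other across iterations, which is why the log interleaves `add_ns` and `remove_ns` lines out of order — that interleaving is the stress the test is after.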
00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.569 rmmod nvme_tcp 00:32:42.569 rmmod nvme_fabrics 00:32:42.569 rmmod nvme_keyring 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 158321 ']' 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 158321 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 158321 ']' 00:32:42.569 11:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 158321 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158321 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158321' 00:32:42.569 killing process with pid 158321 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 158321 00:32:42.569 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 158321 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:32:42.831 11:26:50 
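The cleanup section above traces `killprocess` from common/autotest_common.sh (the `kill -0`, `uname`, `ps --no-headers -o comm=`, `kill`, `wait` sequence). A hedged sketch of that helper as reconstructed from the trace — not the actual SPDK source, just the same guard-then-kill-then-reap shape:

```shell
#!/usr/bin/env bash
# Reconstructed from the traced steps: verify the pid is still alive,
# refuse to signal a bare "sudo" wrapper, then kill it and reap it.
killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2>/dev/null || return 0            # already gone: nothing to do
  if [[ $(uname) == Linux ]]; then
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name == sudo ]] && return 1         # never kill the sudo wrapper itself
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                   # reap so the pid cannot be reused
}
```

The `wait` at the end mirrors the `wait 158321` in the log: the test harness blocks until the nvmf target has actually exited before tearing down kernel modules and network namespaces.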
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.831 11:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.744 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.744 00:32:44.744 real 0m49.889s 00:32:44.744 user 3m4.359s 00:32:44.744 sys 0m22.170s 00:32:44.744 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.744 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:44.744 ************************************ 00:32:44.744 END TEST nvmf_ns_hotplug_stress 00:32:44.744 ************************************ 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 
1 ']' 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:45.006 ************************************ 00:32:45.006 START TEST nvmf_delete_subsystem 00:32:45.006 ************************************ 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:45.006 * Looking for test storage... 00:32:45.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.006 11:26:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
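The block above traces `cmp_versions` from scripts/common.sh checking `lt 1.15 2` (is the installed lcov older than 2?). A compact sketch of the same idea, reduced from the traced steps to a single "less than" helper — splitting on `.` and `-` and comparing components numerically, with missing components treated as 0 as the trace's `ver1_l`/`ver2_l` padding suggests:

```shell
#!/usr/bin/env bash
# Component-wise dotted-version comparison, sketched from the traced logic.
lt() {
  local IFS=.-
  local -a v1=($1) v2=($2)
  local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    a=${v1[i]:-0} b=${v2[i]:-0}                 # pad short versions with 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                                       # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"                     # the case exercised in the trace
```

In the log this check passes (lcov 1.15 is older than 2), which is why the script falls through to setting `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` in `LCOV_OPTS` rather than the newer option spelling.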
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:45.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.006 --rc genhtml_branch_coverage=1 00:32:45.006 --rc genhtml_function_coverage=1 00:32:45.006 --rc genhtml_legend=1 00:32:45.006 --rc geninfo_all_blocks=1 00:32:45.006 --rc geninfo_unexecuted_blocks=1 00:32:45.006 00:32:45.006 ' 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:45.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.006 --rc genhtml_branch_coverage=1 00:32:45.006 --rc genhtml_function_coverage=1 00:32:45.006 --rc genhtml_legend=1 00:32:45.006 --rc geninfo_all_blocks=1 00:32:45.006 --rc geninfo_unexecuted_blocks=1 00:32:45.006 00:32:45.006 ' 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:45.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.006 --rc genhtml_branch_coverage=1 00:32:45.006 --rc genhtml_function_coverage=1 00:32:45.006 --rc genhtml_legend=1 00:32:45.006 --rc geninfo_all_blocks=1 00:32:45.006 --rc geninfo_unexecuted_blocks=1 00:32:45.006 00:32:45.006 ' 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:45.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.006 --rc genhtml_branch_coverage=1 00:32:45.006 --rc genhtml_function_coverage=1 00:32:45.006 --rc genhtml_legend=1 00:32:45.006 --rc geninfo_all_blocks=1 00:32:45.006 --rc geninfo_unexecuted_blocks=1 00:32:45.006 00:32:45.006 ' 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.006 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.268 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.269 11:26:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.269 11:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:53.416 11:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:53.416 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:53.416 11:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:53.417 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:53.417 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.417 11:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:53.417 Found net devices under 0000:31:00.0: cvl_0_0 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:53.417 Found net devices under 0000:31:00.1: cvl_0_1 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:53.417 11:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.417 11:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:53.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:53.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:32:53.417 00:32:53.417 --- 10.0.0.2 ping statistics --- 00:32:53.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.417 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:53.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:53.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:32:53.417 00:32:53.417 --- 10.0.0.1 ping statistics --- 00:32:53.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.417 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:53.417 
11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=170760 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 170760 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 170760 ']' 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.417 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.418 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:53.418 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.418 11:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:53.418 [2024-11-19 11:27:01.487251] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:53.418 [2024-11-19 11:27:01.488431] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:32:53.418 [2024-11-19 11:27:01.488484] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.418 [2024-11-19 11:27:01.581024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:53.418 [2024-11-19 11:27:01.621342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.418 [2024-11-19 11:27:01.621380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.418 [2024-11-19 11:27:01.621388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.418 [2024-11-19 11:27:01.621396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.418 [2024-11-19 11:27:01.621401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.418 [2024-11-19 11:27:01.622654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.418 [2024-11-19 11:27:01.622656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.418 [2024-11-19 11:27:01.678575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:53.418 [2024-11-19 11:27:01.679076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:53.418 [2024-11-19 11:27:01.679416] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:53.990 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.990 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:53.990 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.990 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.990 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:53.990 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.990 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:53.990 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:54.251 [2024-11-19 11:27:02.347260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:54.251 [2024-11-19 11:27:02.375941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:54.251 NULL1 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:54.251 Delay0 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=171020 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:54.251 11:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:54.251 [2024-11-19 11:27:02.471883] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:32:56.162 11:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:56.162 11:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.162 11:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:56.423 Write completed with error (sct=0, sc=8) 00:32:56.423 Read completed with error (sct=0, sc=8) 00:32:56.423 starting I/O failed: -6 00:32:56.423 Read completed with error (sct=0, sc=8) 00:32:56.423 Write completed with error (sct=0, sc=8) 00:32:56.423 Read completed with error (sct=0, sc=8) 00:32:56.423 Read completed with error (sct=0, sc=8) 00:32:56.423 starting I/O failed: -6 00:32:56.423 Read completed with error (sct=0, sc=8) 00:32:56.423 Read completed with error (sct=0, sc=8) 00:32:56.423 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 
00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed 
with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 
Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 
00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 starting I/O failed: -6 00:32:56.424 Read completed with error (sct=0, sc=8) 00:32:56.424 Write completed with error (sct=0, sc=8) 00:32:56.425 [2024-11-19 11:27:04.553425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e24000c40 is same with the state(6) to be set 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 
00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Read completed with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:56.425 Write completed 
with error (sct=0, sc=8) 00:32:56.425 Write completed with error (sct=0, sc=8) 00:32:57.368 [2024-11-19 11:27:05.529254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69e5e0 is same with the state(6) to be set 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 [2024-11-19 11:27:05.555956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e2400d7e0 is same with the state(6) to be set 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Write completed with 
error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 [2024-11-19 11:27:05.556072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e2400d020 is same with the state(6) to be set 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Write completed with error 
(sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.368 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 [2024-11-19 11:27:05.556458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69d0e0 is same with the state(6) to be set 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 
00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Write completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 Read completed with error (sct=0, sc=8) 00:32:57.369 [2024-11-19 11:27:05.556763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69d4a0 is same with the state(6) to be set 00:32:57.369 Initializing NVMe Controllers 00:32:57.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.369 Controller IO queue size 128, less than required. 00:32:57.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:57.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:57.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:57.369 Initialization complete. Launching workers. 
00:32:57.369 ======================================================== 00:32:57.369 Latency(us) 00:32:57.369 Device Information : IOPS MiB/s Average min max 00:32:57.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.71 0.09 904013.25 313.38 1009397.53 00:32:57.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.37 0.07 987683.30 275.60 2001101.79 00:32:57.369 ======================================================== 00:32:57.369 Total : 331.08 0.16 941004.22 275.60 2001101.79 00:32:57.369 00:32:57.369 [2024-11-19 11:27:05.557058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69e5e0 (9): Bad file descriptor 00:32:57.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:32:57.369 11:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.369 11:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:32:57.369 11:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 171020 00:32:57.369 11:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 171020 00:32:57.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (171020) - No such process 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 171020 00:32:57.942 11:27:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 171020 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 171020 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:57.942 [2024-11-19 11:27:06.091585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=171761 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 171761 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:57.942 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:57.942 [2024-11-19 11:27:06.169955] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:32:58.515 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:58.515 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 171761 00:32:58.515 11:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:58.777 11:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:58.777 11:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 171761 00:32:58.777 11:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:59.350 11:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:59.350 11:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 171761 00:32:59.350 11:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:59.923 11:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:59.923 11:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 171761 00:32:59.923 11:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:00.496 11:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:00.496 11:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 171761 00:33:00.496 11:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:01.068 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:01.068 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 171761 00:33:01.068 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:01.068 Initializing NVMe Controllers 00:33:01.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:01.068 Controller IO queue size 128, less than required. 00:33:01.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:01.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:01.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:01.068 Initialization complete. Launching workers. 
00:33:01.068 ======================================================== 00:33:01.068 Latency(us) 00:33:01.068 Device Information : IOPS MiB/s Average min max 00:33:01.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002390.12 1000305.50 1041542.18 00:33:01.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003965.27 1000418.85 1011042.20 00:33:01.068 ======================================================== 00:33:01.068 Total : 256.00 0.12 1003177.70 1000305.50 1041542.18 00:33:01.068 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 171761 00:33:01.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (171761) - No such process 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 171761 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:33:01.329 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:01.329 rmmod nvme_tcp 00:33:01.329 rmmod nvme_fabrics 00:33:01.590 rmmod nvme_keyring 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 170760 ']' 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 170760 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 170760 ']' 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 170760 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170760 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 170760' 00:33:01.590 killing process with pid 170760 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 170760 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 170760 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.590 11:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.157 11:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:04.157 00:33:04.157 real 0m18.832s 00:33:04.157 user 0m26.494s 00:33:04.157 sys 0m7.759s 00:33:04.157 11:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:04.157 11:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:04.157 ************************************ 00:33:04.157 END TEST nvmf_delete_subsystem 00:33:04.157 ************************************ 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:04.157 ************************************ 00:33:04.157 START TEST nvmf_host_management 00:33:04.157 ************************************ 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:04.157 * Looking for test storage... 
00:33:04.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.157 11:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:04.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.157 --rc genhtml_branch_coverage=1 00:33:04.157 --rc genhtml_function_coverage=1 00:33:04.157 --rc genhtml_legend=1 00:33:04.157 --rc geninfo_all_blocks=1 00:33:04.157 --rc geninfo_unexecuted_blocks=1 00:33:04.157 00:33:04.157 ' 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:04.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.157 --rc genhtml_branch_coverage=1 00:33:04.157 --rc genhtml_function_coverage=1 00:33:04.157 --rc genhtml_legend=1 00:33:04.157 --rc geninfo_all_blocks=1 00:33:04.157 --rc geninfo_unexecuted_blocks=1 00:33:04.157 00:33:04.157 ' 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:04.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.157 --rc genhtml_branch_coverage=1 00:33:04.157 --rc genhtml_function_coverage=1 00:33:04.157 --rc genhtml_legend=1 00:33:04.157 --rc geninfo_all_blocks=1 00:33:04.157 --rc geninfo_unexecuted_blocks=1 00:33:04.157 00:33:04.157 ' 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:04.157 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.157 --rc genhtml_branch_coverage=1 00:33:04.157 --rc genhtml_function_coverage=1 00:33:04.157 --rc genhtml_legend=1 00:33:04.157 --rc geninfo_all_blocks=1 00:33:04.157 --rc geninfo_unexecuted_blocks=1 00:33:04.157 00:33:04.157 ' 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.157 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.157 11:27:12 
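The trace above assembles `LCOV_OPTS` from a list of `--rc` coverage switches. A minimal sketch of that assembly, assuming a hypothetical helper name (`lcov_opts_sketch` is not part of the SPDK scripts):

```shell
# Hypothetical condensation of the LCOV_OPTS assembly seen in the trace:
# each rc key=value pair is prefixed with --rc and joined onto the lcov
# command string. Only the two flags visible in the log are included.
lcov_opts_sketch() {
  local -a rc=(lcov_branch_coverage=1 lcov_function_coverage=1)
  local opt out=''
  for opt in "${rc[@]}"; do
    out="$out --rc $opt"          # accumulate " --rc key=value" fragments
  done
  echo "lcov$out"
}
```

The real `autotest_common.sh` also appends the `genhtml_*`/`geninfo_*` flags shown in the log; they are omitted here for brevity.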
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.158 
11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
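The `PATH` echoed by `paths/export.sh` above has grown the same `/opt/go`, `/opt/golangci`, and `/opt/protoc` directories many times over, because `export.sh` prepends them on every sourcing. A sketch of a dedup helper that would collapse such a string, keeping the first occurrence of each entry (`dedup_path` is hypothetical, not part of the SPDK scripts):

```shell
# Hypothetical helper: collapse duplicate entries in a colon-separated
# PATH-like string, preserving the order of first occurrence.
dedup_path() {
  local IFS=: out='' seen='' dir
  for dir in $1; do                 # unquoted on purpose: IFS=: splits on colons
    case ":$seen:" in
      *":$dir:"*) ;;                # already kept once, drop the repeat
      *) seen="$seen:$dir"
         out="${out:+$out:}$dir" ;; # append with a colon unless out is empty
    esac
  done
  printf '%s\n' "$out"
}
```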
-n '' ']' 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
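The `build_nvmf_app_args` steps traced above append flags to the `NVMF_APP` array conditionally; in this run the interrupt-mode test (`'[' 1 -eq 1 ']'`) adds `--interrupt-mode`. A self-contained sketch of that pattern, under the assumption that the array starts with the `nvmf_tgt` binary name seen later in the log (the function name is hypothetical):

```shell
# Sketch of the conditional argument assembly from the trace: the shm id and
# tracepoint mask are always added; --interrupt-mode only when requested.
build_app_args_sketch() {
  local shm_id=$1 interrupt=$2
  local -a app=(nvmf_tgt)
  app+=(-i "$shm_id" -e 0xFFFF)            # as in nvmf/common.sh@29 above
  if [ "$interrupt" = 1 ]; then
    app+=(--interrupt-mode)                # as in nvmf/common.sh@34 above
  fi
  echo "${app[@]}"
}
```

The resulting argument list matches the `nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode` invocation visible further down in this log.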
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:04.158 11:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:12.350 
11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.350 11:27:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:12.350 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:12.351 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.351 11:27:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:12.351 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.351 11:27:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:12.351 Found net devices under 0000:31:00.0: cvl_0_0 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:12.351 Found net devices under 0000:31:00.1: cvl_0_1 00:33:12.351 11:27:20 
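The device-discovery trace above sorts PCI IDs into the `e810`, `x722`, and `mlx` arrays using the vendor constants `intel=0x8086` and `mellanox=0x15b3`. A compact sketch of that classification as a lookup function (the function name is hypothetical; the ID list is the subset visible in this trace):

```shell
# Hypothetical classifier mirroring the vendor/device checks in the trace:
# map a PCI vendor:device pair to the NIC family the script files it under.
nic_family() {
  case "$1:$2" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver)
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx  ;;    # Mellanox, any device id
    *)                           echo unknown ;;
  esac
}
```

The two `Found 0000:31:00.x (0x8086 - 0x159b)` lines above correspond to the first branch.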
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:33:12.351 00:33:12.351 --- 10.0.0.2 ping statistics --- 00:33:12.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.351 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:12.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:33:12.351 00:33:12.351 --- 10.0.0.1 ping statistics --- 00:33:12.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.351 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
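The `ipts` call above expands into a full `iptables` command with an `SPDK_NVMF:` comment that repeats the rule's own arguments, so the test harness can find and delete its rules later. A sketch of that wrapper, echoing the command instead of executing it (real `iptables` needs root; `ipts_sketch` is a hypothetical stand-in for the wrapper in `nvmf/common.sh`):

```shell
# Sketch of the ipts wrapper seen at nvmf/common.sh@790 in the trace:
# forward all arguments to iptables and tag the rule with a comment
# containing those same arguments, prefixed with SPDK_NVMF:.
ipts_sketch() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

Rules tagged this way can be removed in bulk by grepping `iptables-save` output for the `SPDK_NVMF:` marker.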
00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=177568 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 177568 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:12.351 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 177568 ']' 00:33:12.352 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.352 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.352 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.352 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.352 11:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:12.614 [2024-11-19 11:27:20.715006] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:12.614 [2024-11-19 11:27:20.716196] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:33:12.614 [2024-11-19 11:27:20.716248] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.614 [2024-11-19 11:27:20.828263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:12.614 [2024-11-19 11:27:20.880707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.614 [2024-11-19 11:27:20.880759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.614 [2024-11-19 11:27:20.880769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.614 [2024-11-19 11:27:20.880776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.614 [2024-11-19 11:27:20.880782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:12.614 [2024-11-19 11:27:20.882860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.614 [2024-11-19 11:27:20.883024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:12.614 [2024-11-19 11:27:20.883300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:12.614 [2024-11-19 11:27:20.883301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.614 [2024-11-19 11:27:20.958337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:12.614 [2024-11-19 11:27:20.958956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:12.614 [2024-11-19 11:27:20.959939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:12.614 [2024-11-19 11:27:20.960000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:12.614 [2024-11-19 11:27:20.960185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 [2024-11-19 11:27:21.608335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 11:27:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 Malloc0 00:33:13.556 [2024-11-19 11:27:21.700578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=177680 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 177680 /var/tmp/bdevperf.sock 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 177680 ']' 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:13.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:13.556 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:13.557 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:13.557 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:13.557 { 00:33:13.557 "params": { 00:33:13.557 "name": "Nvme$subsystem", 00:33:13.557 "trtype": "$TEST_TRANSPORT", 00:33:13.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:13.557 "adrfam": "ipv4", 00:33:13.557 "trsvcid": "$NVMF_PORT", 00:33:13.557 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:33:13.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:13.557 "hdgst": ${hdgst:-false}, 00:33:13.557 "ddgst": ${ddgst:-false} 00:33:13.557 }, 00:33:13.557 "method": "bdev_nvme_attach_controller" 00:33:13.557 } 00:33:13.557 EOF 00:33:13.557 )") 00:33:13.557 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:13.557 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:13.557 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:13.557 11:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:13.557 "params": { 00:33:13.557 "name": "Nvme0", 00:33:13.557 "trtype": "tcp", 00:33:13.557 "traddr": "10.0.0.2", 00:33:13.557 "adrfam": "ipv4", 00:33:13.557 "trsvcid": "4420", 00:33:13.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:13.557 "hdgst": false, 00:33:13.557 "ddgst": false 00:33:13.557 }, 00:33:13.557 "method": "bdev_nvme_attach_controller" 00:33:13.557 }' 00:33:13.557 [2024-11-19 11:27:21.805082] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:33:13.557 [2024-11-19 11:27:21.805136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177680 ] 00:33:13.557 [2024-11-19 11:27:21.887808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.818 [2024-11-19 11:27:21.924475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.818 Running I/O for 10 seconds... 
00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:14.392 11:27:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=989 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 989 -ge 100 ']' 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:14.392 
[2024-11-19 11:27:22.667957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b8800 is same with the state(6) to be set 00:33:14.392 [2024-11-19 11:27:22.668001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b8800 is same with the state(6) to be set 00:33:14.392 [2024-11-19 11:27:22.668011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b8800 is same with the state(6) to be set 00:33:14.392 [2024-11-19 11:27:22.668018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b8800 is same with the state(6) to be set 00:33:14.392 [2024-11-19 11:27:22.668024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b8800 is same with the state(6) to be set 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:14.392 [2024-11-19 11:27:22.678442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:14.392 [2024-11-19 11:27:22.678476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.678487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:14.392 [2024-11-19 11:27:22.678495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.678503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:14.392 [2024-11-19 11:27:22.678510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.678523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:14.392 [2024-11-19 11:27:22.678531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.678538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9b00 is same with the state(6) to be set 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.392 [2024-11-19 11:27:22.685142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.392 [2024-11-19 11:27:22.685165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.685180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.392 [2024-11-19 11:27:22.685187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.685197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:14.392 [2024-11-19 11:27:22.685204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.685214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.392 [2024-11-19 11:27:22.685221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.685230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.392 [2024-11-19 11:27:22.685237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 11:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:14.392 [2024-11-19 11:27:22.685249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.392 [2024-11-19 11:27:22.685257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.685266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.392 [2024-11-19 11:27:22.685274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.392 [2024-11-19 11:27:22.685283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685577] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 
[2024-11-19 11:27:22.685772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.393 [2024-11-19 11:27:22.685937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.393 [2024-11-19 11:27:22.685946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.685954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.685963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.685972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.685982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.685989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.685998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 
11:27:22.686154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.686231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.394 [2024-11-19 11:27:22.686238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.394 [2024-11-19 11:27:22.687459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:14.394 task offset: 8192 on job bdev=Nvme0n1 fails 00:33:14.394 00:33:14.394 Latency(us) 00:33:14.394 [2024-11-19T10:27:22.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.394 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:14.394 Job: Nvme0n1 ended in about 0.62 seconds with error 00:33:14.394 Verification LBA range: start 0x0 length 0x400 00:33:14.394 Nvme0n1 : 0.62 1752.05 109.50 103.06 0.00 33679.64 1481.39 31238.83 00:33:14.394 [2024-11-19T10:27:22.746Z] =================================================================================================================== 00:33:14.394 [2024-11-19T10:27:22.746Z] Total : 1752.05 109.50 103.06 0.00 33679.64 1481.39 31238.83 00:33:14.394 [2024-11-19 11:27:22.689443] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:14.394 [2024-11-19 11:27:22.689464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d9b00 (9): Bad file descriptor 00:33:14.394 [2024-11-19 11:27:22.695354] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 177680 00:33:15.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (177680) - No such process 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:15.779 { 00:33:15.779 "params": { 00:33:15.779 "name": "Nvme$subsystem", 00:33:15.779 "trtype": "$TEST_TRANSPORT", 00:33:15.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.779 "adrfam": "ipv4", 00:33:15.779 "trsvcid": "$NVMF_PORT", 00:33:15.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.779 "hdgst": ${hdgst:-false}, 00:33:15.779 "ddgst": ${ddgst:-false} 
00:33:15.779 }, 00:33:15.779 "method": "bdev_nvme_attach_controller" 00:33:15.779 } 00:33:15.779 EOF 00:33:15.779 )") 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:15.779 11:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:15.779 "params": { 00:33:15.779 "name": "Nvme0", 00:33:15.779 "trtype": "tcp", 00:33:15.779 "traddr": "10.0.0.2", 00:33:15.779 "adrfam": "ipv4", 00:33:15.779 "trsvcid": "4420", 00:33:15.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.779 "hdgst": false, 00:33:15.779 "ddgst": false 00:33:15.779 }, 00:33:15.779 "method": "bdev_nvme_attach_controller" 00:33:15.779 }' 00:33:15.779 [2024-11-19 11:27:23.755091] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:33:15.779 [2024-11-19 11:27:23.755148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178059 ] 00:33:15.779 [2024-11-19 11:27:23.831359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.779 [2024-11-19 11:27:23.867192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.040 Running I/O for 1 seconds... 
00:33:16.982 1739.00 IOPS, 108.69 MiB/s 00:33:16.982 Latency(us) 00:33:16.982 [2024-11-19T10:27:25.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.982 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:16.982 Verification LBA range: start 0x0 length 0x400 00:33:16.982 Nvme0n1 : 1.01 1780.11 111.26 0.00 0.00 35261.61 2430.29 36044.80 00:33:16.982 [2024-11-19T10:27:25.334Z] =================================================================================================================== 00:33:16.982 [2024-11-19T10:27:25.334Z] Total : 1780.11 111.26 0.00 0.00 35261.61 2430.29 36044.80 00:33:16.982 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:16.982 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:16.983 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:16.983 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:16.983 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:33:16.983 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:16.983 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:33:16.983 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:16.983 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:33:16.983 
11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:16.983 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:16.983 rmmod nvme_tcp 00:33:17.243 rmmod nvme_fabrics 00:33:17.243 rmmod nvme_keyring 00:33:17.243 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:17.243 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:17.243 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 177568 ']' 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 177568 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 177568 ']' 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 177568 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 177568 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:17.244 11:27:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 177568' 00:33:17.244 killing process with pid 177568 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 177568 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 177568 00:33:17.244 [2024-11-19 11:27:25.559679] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:33:17.244 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:17.505 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:17.505 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:17.505 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.505 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.505 11:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:19.416 00:33:19.416 real 0m15.601s 00:33:19.416 user 0m19.509s 00:33:19.416 sys 0m8.358s 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.416 ************************************ 00:33:19.416 END TEST nvmf_host_management 00:33:19.416 ************************************ 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:19.416 ************************************ 00:33:19.416 START TEST nvmf_lvol 00:33:19.416 ************************************ 00:33:19.416 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:19.677 * Looking for test storage... 
00:33:19.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:19.677 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:19.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.678 --rc genhtml_branch_coverage=1 00:33:19.678 --rc genhtml_function_coverage=1 00:33:19.678 --rc genhtml_legend=1 00:33:19.678 --rc geninfo_all_blocks=1 00:33:19.678 --rc geninfo_unexecuted_blocks=1 00:33:19.678 00:33:19.678 ' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:19.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.678 --rc genhtml_branch_coverage=1 00:33:19.678 --rc genhtml_function_coverage=1 00:33:19.678 --rc genhtml_legend=1 00:33:19.678 --rc geninfo_all_blocks=1 00:33:19.678 --rc geninfo_unexecuted_blocks=1 00:33:19.678 00:33:19.678 ' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:19.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.678 --rc genhtml_branch_coverage=1 00:33:19.678 --rc genhtml_function_coverage=1 00:33:19.678 --rc genhtml_legend=1 00:33:19.678 --rc geninfo_all_blocks=1 00:33:19.678 --rc geninfo_unexecuted_blocks=1 00:33:19.678 00:33:19.678 ' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:19.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.678 --rc genhtml_branch_coverage=1 00:33:19.678 --rc genhtml_function_coverage=1 00:33:19.678 --rc genhtml_legend=1 00:33:19.678 --rc geninfo_all_blocks=1 00:33:19.678 --rc geninfo_unexecuted_blocks=1 00:33:19.678 00:33:19.678 ' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:19.678 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:19.679 
11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:19.679 11:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:27.825 11:27:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:27.825 11:27:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:27.825 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.825 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:27.826 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.826 11:27:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:27.826 Found net devices under 0000:31:00.0: cvl_0_0 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.826 11:27:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:27.826 Found net devices under 0000:31:00.1: cvl_0_1 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:27.826 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:28.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:33:28.087 00:33:28.087 --- 10.0.0.2 ping statistics --- 00:33:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.087 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:28.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:33:28.087 00:33:28.087 --- 10.0.0.1 ping statistics --- 00:33:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.087 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:28.087 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=183171 
00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 183171 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 183171 ']' 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.348 11:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:28.348 [2024-11-19 11:27:36.538552] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:28.348 [2024-11-19 11:27:36.539707] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:33:28.348 [2024-11-19 11:27:36.539758] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.348 [2024-11-19 11:27:36.631985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:28.348 [2024-11-19 11:27:36.673452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.348 [2024-11-19 11:27:36.673489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.348 [2024-11-19 11:27:36.673498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.348 [2024-11-19 11:27:36.673505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.348 [2024-11-19 11:27:36.673511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.348 [2024-11-19 11:27:36.674898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.348 [2024-11-19 11:27:36.674986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.348 [2024-11-19 11:27:36.674990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.609 [2024-11-19 11:27:36.731015] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:28.609 [2024-11-19 11:27:36.731575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:28.609 [2024-11-19 11:27:36.731875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:28.609 [2024-11-19 11:27:36.732138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:29.180 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.180 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:29.180 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:29.180 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:29.180 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:29.180 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.180 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:29.180 [2024-11-19 11:27:37.528039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.440 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:29.440 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:29.440 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:29.701 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:29.701 11:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:29.962 11:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:29.962 11:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8571ec27-f932-49ae-9b32-a7547868cc83 00:33:29.962 11:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8571ec27-f932-49ae-9b32-a7547868cc83 lvol 20 00:33:30.223 11:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=007fa978-e92e-4f90-aa79-39169db83d37 00:33:30.223 11:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:30.483 11:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 007fa978-e92e-4f90-aa79-39169db83d37 00:33:30.483 11:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:30.744 [2024-11-19 11:27:38.939830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.744 11:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:31.009 
11:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=183744 00:33:31.009 11:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:31.009 11:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:31.957 11:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 007fa978-e92e-4f90-aa79-39169db83d37 MY_SNAPSHOT 00:33:32.217 11:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4becbb68-c674-4cf5-98d0-0aafbc7bc8f7 00:33:32.217 11:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 007fa978-e92e-4f90-aa79-39169db83d37 30 00:33:32.478 11:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4becbb68-c674-4cf5-98d0-0aafbc7bc8f7 MY_CLONE 00:33:32.478 11:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ca86ff74-3408-4011-8679-de64d0fed625 00:33:32.478 11:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ca86ff74-3408-4011-8679-de64d0fed625 00:33:33.050 11:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 183744 00:33:41.191 Initializing NVMe Controllers 00:33:41.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:41.191 
Controller IO queue size 128, less than required. 00:33:41.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:41.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:41.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:41.191 Initialization complete. Launching workers. 00:33:41.191 ======================================================== 00:33:41.191 Latency(us) 00:33:41.191 Device Information : IOPS MiB/s Average min max 00:33:41.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12336.20 48.19 10378.56 1626.61 57235.95 00:33:41.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15515.80 60.61 8251.40 3876.44 51698.81 00:33:41.191 ======================================================== 00:33:41.191 Total : 27852.00 108.80 9193.56 1626.61 57235.95 00:33:41.191 00:33:41.191 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:41.452 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 007fa978-e92e-4f90-aa79-39169db83d37 00:33:41.452 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8571ec27-f932-49ae-9b32-a7547868cc83 00:33:41.712 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.713 rmmod nvme_tcp 00:33:41.713 rmmod nvme_fabrics 00:33:41.713 rmmod nvme_keyring 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 183171 ']' 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 183171 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 183171 ']' 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 183171 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.713 11:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 183171 00:33:41.713 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:41.713 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:41.713 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 183171' 00:33:41.713 killing process with pid 183171 00:33:41.713 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 183171 00:33:41.713 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 183171 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.974 11:27:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.974 11:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.518 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.518 00:33:44.518 real 0m24.510s 00:33:44.518 user 0m55.554s 00:33:44.518 sys 0m11.234s 00:33:44.518 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.518 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:44.518 ************************************ 00:33:44.518 END TEST nvmf_lvol 00:33:44.518 ************************************ 00:33:44.518 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:44.518 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:44.518 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.518 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:44.519 ************************************ 00:33:44.519 START TEST nvmf_lvs_grow 00:33:44.519 ************************************ 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:44.519 * Looking for test storage... 
00:33:44.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.519 11:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.519 11:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:44.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.519 --rc genhtml_branch_coverage=1 00:33:44.519 --rc genhtml_function_coverage=1 00:33:44.519 --rc genhtml_legend=1 00:33:44.519 --rc geninfo_all_blocks=1 00:33:44.519 --rc geninfo_unexecuted_blocks=1 00:33:44.519 00:33:44.519 ' 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:44.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.519 --rc genhtml_branch_coverage=1 00:33:44.519 --rc genhtml_function_coverage=1 00:33:44.519 --rc genhtml_legend=1 00:33:44.519 --rc geninfo_all_blocks=1 00:33:44.519 --rc geninfo_unexecuted_blocks=1 00:33:44.519 00:33:44.519 ' 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:44.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.519 --rc genhtml_branch_coverage=1 00:33:44.519 --rc genhtml_function_coverage=1 00:33:44.519 --rc genhtml_legend=1 00:33:44.519 --rc geninfo_all_blocks=1 00:33:44.519 --rc geninfo_unexecuted_blocks=1 00:33:44.519 00:33:44.519 ' 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:44.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.519 --rc genhtml_branch_coverage=1 00:33:44.519 --rc genhtml_function_coverage=1 00:33:44.519 --rc genhtml_legend=1 00:33:44.519 --rc geninfo_all_blocks=1 00:33:44.519 --rc 
geninfo_unexecuted_blocks=1 00:33:44.519 00:33:44.519 ' 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:44.519 11:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.519 11:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.519 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:44.520 11:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:44.520 11:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:52.661 
11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.661 11:28:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:52.661 11:28:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:52.661 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:52.661 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:52.661 Found net devices under 0000:31:00.0: cvl_0_0 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.661 11:28:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:52.661 Found net devices under 0000:31:00.1: cvl_0_1 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:52.661 
11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:52.661 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:52.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:33:52.662 00:33:52.662 --- 10.0.0.2 ping statistics --- 00:33:52.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.662 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:33:52.662 00:33:52.662 --- 10.0.0.1 ping statistics --- 00:33:52.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.662 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:52.662 11:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:52.923 11:28:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=190440 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 190440 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 190440 ']' 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:52.923 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:52.923 [2024-11-19 11:28:01.081090] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:52.923 [2024-11-19 11:28:01.082224] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:33:52.923 [2024-11-19 11:28:01.082272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.923 [2024-11-19 11:28:01.175530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.923 [2024-11-19 11:28:01.215603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.923 [2024-11-19 11:28:01.215642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.923 [2024-11-19 11:28:01.215651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.923 [2024-11-19 11:28:01.215657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.923 [2024-11-19 11:28:01.215663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.923 [2024-11-19 11:28:01.216254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.923 [2024-11-19 11:28:01.271900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:52.923 [2024-11-19 11:28:01.272159] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:53.865 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.865 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:53.865 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:53.865 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:53.865 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:53.865 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.865 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:53.865 [2024-11-19 11:28:02.073055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:53.865 ************************************ 00:33:53.865 START TEST lvs_grow_clean 00:33:53.865 ************************************ 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:33:53.865 11:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:53.865 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:54.126 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:54.126 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:54.386 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ae27559e-f307-4c8a-b272-3763cf43eceb 00:33:54.386 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:33:54.386 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:54.386 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:54.386 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:54.386 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae27559e-f307-4c8a-b272-3763cf43eceb lvol 150 00:33:54.646 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe14ca84-fb30-47f8-bd3b-1bd5e59b8126 00:33:54.646 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:54.646 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:54.907 [2024-11-19 11:28:03.000614] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:54.907 [2024-11-19 11:28:03.000680] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:54.907 true 00:33:54.907 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:33:54.907 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:54.907 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:54.907 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:55.167 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe14ca84-fb30-47f8-bd3b-1bd5e59b8126 00:33:55.167 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:55.427 [2024-11-19 11:28:03.672930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.427 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=191097 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 191097 /var/tmp/bdevperf.sock 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 191097 ']' 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:55.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:55.688 11:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:55.688 [2024-11-19 11:28:03.925022] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:33:55.688 [2024-11-19 11:28:03.925096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191097 ] 00:33:55.688 [2024-11-19 11:28:04.022752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.950 [2024-11-19 11:28:04.069485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.521 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:56.521 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:56.521 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:56.782 Nvme0n1 00:33:56.782 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:57.043 [ 00:33:57.043 { 00:33:57.043 "name": "Nvme0n1", 00:33:57.043 "aliases": [ 00:33:57.043 "fe14ca84-fb30-47f8-bd3b-1bd5e59b8126" 00:33:57.043 ], 00:33:57.043 "product_name": "NVMe disk", 00:33:57.043 
"block_size": 4096, 00:33:57.043 "num_blocks": 38912, 00:33:57.043 "uuid": "fe14ca84-fb30-47f8-bd3b-1bd5e59b8126", 00:33:57.043 "numa_id": 0, 00:33:57.043 "assigned_rate_limits": { 00:33:57.043 "rw_ios_per_sec": 0, 00:33:57.043 "rw_mbytes_per_sec": 0, 00:33:57.043 "r_mbytes_per_sec": 0, 00:33:57.043 "w_mbytes_per_sec": 0 00:33:57.043 }, 00:33:57.043 "claimed": false, 00:33:57.043 "zoned": false, 00:33:57.043 "supported_io_types": { 00:33:57.043 "read": true, 00:33:57.043 "write": true, 00:33:57.043 "unmap": true, 00:33:57.043 "flush": true, 00:33:57.043 "reset": true, 00:33:57.043 "nvme_admin": true, 00:33:57.043 "nvme_io": true, 00:33:57.043 "nvme_io_md": false, 00:33:57.043 "write_zeroes": true, 00:33:57.043 "zcopy": false, 00:33:57.043 "get_zone_info": false, 00:33:57.043 "zone_management": false, 00:33:57.043 "zone_append": false, 00:33:57.043 "compare": true, 00:33:57.043 "compare_and_write": true, 00:33:57.043 "abort": true, 00:33:57.043 "seek_hole": false, 00:33:57.043 "seek_data": false, 00:33:57.043 "copy": true, 00:33:57.043 "nvme_iov_md": false 00:33:57.043 }, 00:33:57.043 "memory_domains": [ 00:33:57.043 { 00:33:57.043 "dma_device_id": "system", 00:33:57.043 "dma_device_type": 1 00:33:57.043 } 00:33:57.043 ], 00:33:57.043 "driver_specific": { 00:33:57.043 "nvme": [ 00:33:57.043 { 00:33:57.043 "trid": { 00:33:57.043 "trtype": "TCP", 00:33:57.043 "adrfam": "IPv4", 00:33:57.043 "traddr": "10.0.0.2", 00:33:57.043 "trsvcid": "4420", 00:33:57.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:57.043 }, 00:33:57.043 "ctrlr_data": { 00:33:57.043 "cntlid": 1, 00:33:57.043 "vendor_id": "0x8086", 00:33:57.043 "model_number": "SPDK bdev Controller", 00:33:57.043 "serial_number": "SPDK0", 00:33:57.043 "firmware_revision": "25.01", 00:33:57.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:57.043 "oacs": { 00:33:57.043 "security": 0, 00:33:57.043 "format": 0, 00:33:57.043 "firmware": 0, 00:33:57.043 "ns_manage": 0 00:33:57.043 }, 00:33:57.043 "multi_ctrlr": true, 
00:33:57.043 "ana_reporting": false 00:33:57.043 }, 00:33:57.043 "vs": { 00:33:57.043 "nvme_version": "1.3" 00:33:57.043 }, 00:33:57.043 "ns_data": { 00:33:57.043 "id": 1, 00:33:57.043 "can_share": true 00:33:57.043 } 00:33:57.043 } 00:33:57.043 ], 00:33:57.043 "mp_policy": "active_passive" 00:33:57.043 } 00:33:57.043 } 00:33:57.043 ] 00:33:57.043 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=191260 00:33:57.043 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:57.043 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:57.043 Running I/O for 10 seconds... 00:33:58.444 Latency(us) 00:33:58.444 [2024-11-19T10:28:06.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:58.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:58.444 Nvme0n1 : 1.00 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:33:58.444 [2024-11-19T10:28:06.797Z] =================================================================================================================== 00:33:58.445 [2024-11-19T10:28:06.797Z] Total : 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:33:58.445 00:33:59.021 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:33:59.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:59.021 Nvme0n1 : 2.00 17845.00 69.71 0.00 0.00 0.00 0.00 0.00 00:33:59.021 [2024-11-19T10:28:07.373Z] 
=================================================================================================================== 00:33:59.021 [2024-11-19T10:28:07.373Z] Total : 17845.00 69.71 0.00 0.00 0.00 0.00 0.00 00:33:59.021 00:33:59.282 true 00:33:59.282 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:33:59.282 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:59.282 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:59.282 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:59.282 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 191260 00:34:00.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:00.224 Nvme0n1 : 3.00 17908.00 69.95 0.00 0.00 0.00 0.00 0.00 00:34:00.224 [2024-11-19T10:28:08.576Z] =================================================================================================================== 00:34:00.224 [2024-11-19T10:28:08.576Z] Total : 17908.00 69.95 0.00 0.00 0.00 0.00 0.00 00:34:00.224 00:34:01.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:01.166 Nvme0n1 : 4.00 17939.50 70.08 0.00 0.00 0.00 0.00 0.00 00:34:01.166 [2024-11-19T10:28:09.518Z] =================================================================================================================== 00:34:01.166 [2024-11-19T10:28:09.518Z] Total : 17939.50 70.08 0.00 0.00 0.00 0.00 0.00 00:34:01.166 00:34:02.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:34:02.108 Nvme0n1 : 5.00 17958.40 70.15 0.00 0.00 0.00 0.00 0.00 00:34:02.108 [2024-11-19T10:28:10.460Z] =================================================================================================================== 00:34:02.108 [2024-11-19T10:28:10.460Z] Total : 17958.40 70.15 0.00 0.00 0.00 0.00 0.00 00:34:02.108 00:34:03.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:03.051 Nvme0n1 : 6.00 17971.00 70.20 0.00 0.00 0.00 0.00 0.00 00:34:03.051 [2024-11-19T10:28:11.403Z] =================================================================================================================== 00:34:03.051 [2024-11-19T10:28:11.403Z] Total : 17971.00 70.20 0.00 0.00 0.00 0.00 0.00 00:34:03.051 00:34:04.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.435 Nvme0n1 : 7.00 17998.14 70.31 0.00 0.00 0.00 0.00 0.00 00:34:04.435 [2024-11-19T10:28:12.787Z] =================================================================================================================== 00:34:04.435 [2024-11-19T10:28:12.787Z] Total : 17998.14 70.31 0.00 0.00 0.00 0.00 0.00 00:34:04.435 00:34:05.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:05.377 Nvme0n1 : 8.00 18004.75 70.33 0.00 0.00 0.00 0.00 0.00 00:34:05.377 [2024-11-19T10:28:13.729Z] =================================================================================================================== 00:34:05.377 [2024-11-19T10:28:13.729Z] Total : 18004.75 70.33 0.00 0.00 0.00 0.00 0.00 00:34:05.377 00:34:06.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:06.395 Nvme0n1 : 9.00 18022.11 70.40 0.00 0.00 0.00 0.00 0.00 00:34:06.395 [2024-11-19T10:28:14.747Z] =================================================================================================================== 00:34:06.395 [2024-11-19T10:28:14.747Z] Total : 18022.11 70.40 0.00 0.00 0.00 0.00 0.00 00:34:06.395 
00:34:07.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:07.340 Nvme0n1 : 10.00 18036.00 70.45 0.00 0.00 0.00 0.00 0.00 00:34:07.340 [2024-11-19T10:28:15.692Z] =================================================================================================================== 00:34:07.340 [2024-11-19T10:28:15.692Z] Total : 18036.00 70.45 0.00 0.00 0.00 0.00 0.00 00:34:07.340 00:34:07.340 00:34:07.340 Latency(us) 00:34:07.340 [2024-11-19T10:28:15.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:07.340 Nvme0n1 : 10.00 18033.81 70.44 0.00 0.00 7094.50 2266.45 13653.33 00:34:07.340 [2024-11-19T10:28:15.692Z] =================================================================================================================== 00:34:07.340 [2024-11-19T10:28:15.692Z] Total : 18033.81 70.44 0.00 0.00 7094.50 2266.45 13653.33 00:34:07.340 { 00:34:07.340 "results": [ 00:34:07.340 { 00:34:07.340 "job": "Nvme0n1", 00:34:07.340 "core_mask": "0x2", 00:34:07.340 "workload": "randwrite", 00:34:07.340 "status": "finished", 00:34:07.340 "queue_depth": 128, 00:34:07.340 "io_size": 4096, 00:34:07.340 "runtime": 10.004821, 00:34:07.340 "iops": 18033.80590217456, 00:34:07.340 "mibps": 70.44455430536938, 00:34:07.340 "io_failed": 0, 00:34:07.340 "io_timeout": 0, 00:34:07.340 "avg_latency_us": 7094.502159974136, 00:34:07.341 "min_latency_us": 2266.4533333333334, 00:34:07.341 "max_latency_us": 13653.333333333334 00:34:07.341 } 00:34:07.341 ], 00:34:07.341 "core_count": 1 00:34:07.341 } 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 191097 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 191097 ']' 00:34:07.341 11:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 191097 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 191097 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 191097' 00:34:07.341 killing process with pid 191097 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 191097 00:34:07.341 Received shutdown signal, test time was about 10.000000 seconds 00:34:07.341 00:34:07.341 Latency(us) 00:34:07.341 [2024-11-19T10:28:15.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.341 [2024-11-19T10:28:15.693Z] =================================================================================================================== 00:34:07.341 [2024-11-19T10:28:15.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 191097 00:34:07.341 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:07.601 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:07.601 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:34:07.601 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:07.862 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:07.862 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:07.862 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:08.123 [2024-11-19 11:28:16.232773] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:34:08.123 request: 00:34:08.123 { 00:34:08.123 "uuid": "ae27559e-f307-4c8a-b272-3763cf43eceb", 00:34:08.123 "method": 
"bdev_lvol_get_lvstores", 00:34:08.123 "req_id": 1 00:34:08.123 } 00:34:08.123 Got JSON-RPC error response 00:34:08.123 response: 00:34:08.123 { 00:34:08.123 "code": -19, 00:34:08.123 "message": "No such device" 00:34:08.123 } 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:08.123 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:08.385 aio_bdev 00:34:08.385 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fe14ca84-fb30-47f8-bd3b-1bd5e59b8126 00:34:08.385 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=fe14ca84-fb30-47f8-bd3b-1bd5e59b8126 00:34:08.385 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:08.385 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:34:08.385 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:08.385 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:08.385 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:08.647 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fe14ca84-fb30-47f8-bd3b-1bd5e59b8126 -t 2000 00:34:08.647 [ 00:34:08.647 { 00:34:08.647 "name": "fe14ca84-fb30-47f8-bd3b-1bd5e59b8126", 00:34:08.647 "aliases": [ 00:34:08.647 "lvs/lvol" 00:34:08.647 ], 00:34:08.647 "product_name": "Logical Volume", 00:34:08.647 "block_size": 4096, 00:34:08.647 "num_blocks": 38912, 00:34:08.647 "uuid": "fe14ca84-fb30-47f8-bd3b-1bd5e59b8126", 00:34:08.647 "assigned_rate_limits": { 00:34:08.647 "rw_ios_per_sec": 0, 00:34:08.647 "rw_mbytes_per_sec": 0, 00:34:08.647 "r_mbytes_per_sec": 0, 00:34:08.647 "w_mbytes_per_sec": 0 00:34:08.647 }, 00:34:08.647 "claimed": false, 00:34:08.647 "zoned": false, 00:34:08.647 "supported_io_types": { 00:34:08.647 "read": true, 00:34:08.647 "write": true, 00:34:08.647 "unmap": true, 00:34:08.647 "flush": false, 00:34:08.647 "reset": true, 00:34:08.647 "nvme_admin": false, 00:34:08.647 "nvme_io": false, 00:34:08.647 "nvme_io_md": false, 00:34:08.647 "write_zeroes": true, 00:34:08.647 "zcopy": false, 00:34:08.647 "get_zone_info": false, 00:34:08.647 "zone_management": false, 00:34:08.647 "zone_append": false, 00:34:08.647 "compare": false, 00:34:08.647 "compare_and_write": false, 00:34:08.647 "abort": false, 00:34:08.647 "seek_hole": true, 00:34:08.647 "seek_data": true, 00:34:08.647 "copy": false, 00:34:08.647 "nvme_iov_md": false 00:34:08.647 }, 00:34:08.647 "driver_specific": { 00:34:08.647 "lvol": { 00:34:08.647 "lvol_store_uuid": "ae27559e-f307-4c8a-b272-3763cf43eceb", 00:34:08.647 "base_bdev": "aio_bdev", 00:34:08.647 
"thin_provision": false, 00:34:08.647 "num_allocated_clusters": 38, 00:34:08.647 "snapshot": false, 00:34:08.647 "clone": false, 00:34:08.647 "esnap_clone": false 00:34:08.647 } 00:34:08.647 } 00:34:08.647 } 00:34:08.647 ] 00:34:08.647 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:34:08.647 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:34:08.647 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:08.908 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:08.908 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae27559e-f307-4c8a-b272-3763cf43eceb 00:34:08.908 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:09.169 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:09.169 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fe14ca84-fb30-47f8-bd3b-1bd5e59b8126 00:34:09.169 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae27559e-f307-4c8a-b272-3763cf43eceb 
00:34:09.431 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:09.693 00:34:09.693 real 0m15.771s 00:34:09.693 user 0m15.412s 00:34:09.693 sys 0m1.430s 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:09.693 ************************************ 00:34:09.693 END TEST lvs_grow_clean 00:34:09.693 ************************************ 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:09.693 ************************************ 00:34:09.693 START TEST lvs_grow_dirty 00:34:09.693 ************************************ 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:09.693 11:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:09.693 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:09.954 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:09.954 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:10.215 11:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:10.215 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:10.215 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:10.215 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:10.215 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:10.215 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede lvol 150 00:34:10.476 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3761425e-827e-4ea2-b7ef-a9a2f9d7f33f 00:34:10.476 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:10.476 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:10.737 [2024-11-19 11:28:18.848700] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:10.737 [2024-11-19 
11:28:18.848845] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:10.737 true 00:34:10.737 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:10.737 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:10.737 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:10.737 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:10.999 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3761425e-827e-4ea2-b7ef-a9a2f9d7f33f 00:34:11.260 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:11.260 [2024-11-19 11:28:19.573043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.260 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=194125 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 194125 /var/tmp/bdevperf.sock 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 194125 ']' 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:11.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.522 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:11.522 [2024-11-19 11:28:19.831207] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:34:11.522 [2024-11-19 11:28:19.831288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid194125 ] 00:34:11.784 [2024-11-19 11:28:19.926727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.784 [2024-11-19 11:28:19.966290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.357 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.357 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:12.357 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:12.618 Nvme0n1 00:34:12.618 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:12.880 [ 00:34:12.880 { 00:34:12.880 "name": "Nvme0n1", 00:34:12.880 "aliases": [ 00:34:12.880 "3761425e-827e-4ea2-b7ef-a9a2f9d7f33f" 00:34:12.880 ], 00:34:12.880 "product_name": "NVMe disk", 00:34:12.880 "block_size": 4096, 00:34:12.880 "num_blocks": 38912, 00:34:12.880 "uuid": "3761425e-827e-4ea2-b7ef-a9a2f9d7f33f", 00:34:12.880 "numa_id": 0, 00:34:12.880 "assigned_rate_limits": { 00:34:12.880 "rw_ios_per_sec": 0, 00:34:12.880 "rw_mbytes_per_sec": 0, 00:34:12.880 "r_mbytes_per_sec": 0, 00:34:12.880 "w_mbytes_per_sec": 0 00:34:12.880 }, 00:34:12.880 "claimed": false, 00:34:12.880 "zoned": false, 
00:34:12.880 "supported_io_types": { 00:34:12.880 "read": true, 00:34:12.880 "write": true, 00:34:12.880 "unmap": true, 00:34:12.880 "flush": true, 00:34:12.880 "reset": true, 00:34:12.880 "nvme_admin": true, 00:34:12.880 "nvme_io": true, 00:34:12.880 "nvme_io_md": false, 00:34:12.880 "write_zeroes": true, 00:34:12.880 "zcopy": false, 00:34:12.880 "get_zone_info": false, 00:34:12.880 "zone_management": false, 00:34:12.880 "zone_append": false, 00:34:12.880 "compare": true, 00:34:12.880 "compare_and_write": true, 00:34:12.880 "abort": true, 00:34:12.880 "seek_hole": false, 00:34:12.880 "seek_data": false, 00:34:12.880 "copy": true, 00:34:12.880 "nvme_iov_md": false 00:34:12.880 }, 00:34:12.880 "memory_domains": [ 00:34:12.880 { 00:34:12.880 "dma_device_id": "system", 00:34:12.880 "dma_device_type": 1 00:34:12.880 } 00:34:12.880 ], 00:34:12.880 "driver_specific": { 00:34:12.880 "nvme": [ 00:34:12.880 { 00:34:12.880 "trid": { 00:34:12.880 "trtype": "TCP", 00:34:12.880 "adrfam": "IPv4", 00:34:12.880 "traddr": "10.0.0.2", 00:34:12.880 "trsvcid": "4420", 00:34:12.880 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:12.880 }, 00:34:12.880 "ctrlr_data": { 00:34:12.880 "cntlid": 1, 00:34:12.880 "vendor_id": "0x8086", 00:34:12.880 "model_number": "SPDK bdev Controller", 00:34:12.880 "serial_number": "SPDK0", 00:34:12.880 "firmware_revision": "25.01", 00:34:12.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.880 "oacs": { 00:34:12.880 "security": 0, 00:34:12.880 "format": 0, 00:34:12.880 "firmware": 0, 00:34:12.880 "ns_manage": 0 00:34:12.880 }, 00:34:12.880 "multi_ctrlr": true, 00:34:12.880 "ana_reporting": false 00:34:12.880 }, 00:34:12.880 "vs": { 00:34:12.880 "nvme_version": "1.3" 00:34:12.880 }, 00:34:12.880 "ns_data": { 00:34:12.880 "id": 1, 00:34:12.880 "can_share": true 00:34:12.880 } 00:34:12.880 } 00:34:12.880 ], 00:34:12.880 "mp_policy": "active_passive" 00:34:12.880 } 00:34:12.880 } 00:34:12.880 ] 00:34:12.880 11:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=194273 00:34:12.880 11:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:12.880 11:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:12.880 Running I/O for 10 seconds... 00:34:14.264 Latency(us) 00:34:14.264 [2024-11-19T10:28:22.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:14.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:14.264 Nvme0n1 : 1.00 17784.00 69.47 0.00 0.00 0.00 0.00 0.00 00:34:14.264 [2024-11-19T10:28:22.616Z] =================================================================================================================== 00:34:14.264 [2024-11-19T10:28:22.616Z] Total : 17784.00 69.47 0.00 0.00 0.00 0.00 0.00 00:34:14.264 00:34:14.835 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:15.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:15.097 Nvme0n1 : 2.00 17845.50 69.71 0.00 0.00 0.00 0.00 0.00 00:34:15.097 [2024-11-19T10:28:23.449Z] =================================================================================================================== 00:34:15.097 [2024-11-19T10:28:23.449Z] Total : 17845.50 69.71 0.00 0.00 0.00 0.00 0.00 00:34:15.097 00:34:15.097 true 00:34:15.097 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:15.097 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:15.358 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:15.358 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:15.358 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 194273 00:34:15.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:15.929 Nvme0n1 : 3.00 17908.33 69.95 0.00 0.00 0.00 0.00 0.00 00:34:15.929 [2024-11-19T10:28:24.281Z] =================================================================================================================== 00:34:15.930 [2024-11-19T10:28:24.282Z] Total : 17908.33 69.95 0.00 0.00 0.00 0.00 0.00 00:34:15.930 00:34:17.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:17.314 Nvme0n1 : 4.00 17939.75 70.08 0.00 0.00 0.00 0.00 0.00 00:34:17.314 [2024-11-19T10:28:25.666Z] =================================================================================================================== 00:34:17.314 [2024-11-19T10:28:25.666Z] Total : 17939.75 70.08 0.00 0.00 0.00 0.00 0.00 00:34:17.314 00:34:17.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:17.885 Nvme0n1 : 5.00 17958.60 70.15 0.00 0.00 0.00 0.00 0.00 00:34:17.885 [2024-11-19T10:28:26.237Z] =================================================================================================================== 00:34:17.885 [2024-11-19T10:28:26.237Z] Total : 17958.60 70.15 0.00 0.00 0.00 0.00 0.00 00:34:17.885 00:34:19.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:34:19.270 Nvme0n1 : 6.00 17992.33 70.28 0.00 0.00 0.00 0.00 0.00 00:34:19.270 [2024-11-19T10:28:27.622Z] =================================================================================================================== 00:34:19.270 [2024-11-19T10:28:27.622Z] Total : 17992.33 70.28 0.00 0.00 0.00 0.00 0.00 00:34:19.270 00:34:20.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:20.211 Nvme0n1 : 7.00 17998.29 70.31 0.00 0.00 0.00 0.00 0.00 00:34:20.211 [2024-11-19T10:28:28.563Z] =================================================================================================================== 00:34:20.211 [2024-11-19T10:28:28.563Z] Total : 17998.29 70.31 0.00 0.00 0.00 0.00 0.00 00:34:20.211 00:34:21.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:21.151 Nvme0n1 : 8.00 18018.62 70.39 0.00 0.00 0.00 0.00 0.00 00:34:21.151 [2024-11-19T10:28:29.503Z] =================================================================================================================== 00:34:21.151 [2024-11-19T10:28:29.503Z] Total : 18018.62 70.39 0.00 0.00 0.00 0.00 0.00 00:34:21.151 00:34:22.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:22.094 Nvme0n1 : 9.00 18020.33 70.39 0.00 0.00 0.00 0.00 0.00 00:34:22.094 [2024-11-19T10:28:30.446Z] =================================================================================================================== 00:34:22.094 [2024-11-19T10:28:30.446Z] Total : 18020.33 70.39 0.00 0.00 0.00 0.00 0.00 00:34:22.094 00:34:23.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:23.037 Nvme0n1 : 10.00 18034.40 70.45 0.00 0.00 0.00 0.00 0.00 00:34:23.037 [2024-11-19T10:28:31.389Z] =================================================================================================================== 00:34:23.037 [2024-11-19T10:28:31.389Z] Total : 18034.40 70.45 0.00 0.00 0.00 0.00 0.00 00:34:23.037 00:34:23.037 
00:34:23.037 Latency(us) 00:34:23.037 [2024-11-19T10:28:31.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:23.037 Nvme0n1 : 10.01 18036.77 70.46 0.00 0.00 7093.94 1747.63 13216.43 00:34:23.037 [2024-11-19T10:28:31.389Z] =================================================================================================================== 00:34:23.037 [2024-11-19T10:28:31.389Z] Total : 18036.77 70.46 0.00 0.00 7093.94 1747.63 13216.43 00:34:23.037 { 00:34:23.037 "results": [ 00:34:23.037 { 00:34:23.037 "job": "Nvme0n1", 00:34:23.037 "core_mask": "0x2", 00:34:23.037 "workload": "randwrite", 00:34:23.037 "status": "finished", 00:34:23.037 "queue_depth": 128, 00:34:23.037 "io_size": 4096, 00:34:23.037 "runtime": 10.005782, 00:34:23.037 "iops": 18036.771138927474, 00:34:23.037 "mibps": 70.45613726143544, 00:34:23.037 "io_failed": 0, 00:34:23.037 "io_timeout": 0, 00:34:23.037 "avg_latency_us": 7093.940244396175, 00:34:23.037 "min_latency_us": 1747.6266666666668, 00:34:23.037 "max_latency_us": 13216.426666666666 00:34:23.037 } 00:34:23.037 ], 00:34:23.037 "core_count": 1 00:34:23.037 } 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 194125 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 194125 ']' 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 194125 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.037 11:28:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 194125 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 194125' 00:34:23.037 killing process with pid 194125 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 194125 00:34:23.037 Received shutdown signal, test time was about 10.000000 seconds 00:34:23.037 00:34:23.037 Latency(us) 00:34:23.037 [2024-11-19T10:28:31.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.037 [2024-11-19T10:28:31.389Z] =================================================================================================================== 00:34:23.037 [2024-11-19T10:28:31.389Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:23.037 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 194125 00:34:23.298 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:23.298 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:23.558 11:28:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:23.558 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:23.819 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:23.819 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:23.819 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 190440 00:34:23.819 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 190440 00:34:23.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 190440 Killed "${NVMF_APP[@]}" "$@" 00:34:23.819 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=196359 00:34:23.820 11:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 196359 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 196359 ']' 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.820 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:23.820 [2024-11-19 11:28:32.130216] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:23.820 [2024-11-19 11:28:32.131560] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:34:23.820 [2024-11-19 11:28:32.131613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:24.081 [2024-11-19 11:28:32.221194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.081 [2024-11-19 11:28:32.257187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:24.081 [2024-11-19 11:28:32.257223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:24.081 [2024-11-19 11:28:32.257231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:24.081 [2024-11-19 11:28:32.257238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:24.081 [2024-11-19 11:28:32.257244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:24.081 [2024-11-19 11:28:32.257806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.081 [2024-11-19 11:28:32.312390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:24.081 [2024-11-19 11:28:32.312654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
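As a side note on the bdevperf JSON summary recorded earlier in this run, its figures are internally consistent and can be cross-checked: MiB/s is IOPS times the 4 KiB I/O size, and with a queue that stays full, Little's law says average latency is roughly queue depth over IOPS. The values below are copied from the log; the relations themselves are general, and the latency estimate is only approximate:

```python
# Figures copied from the bdevperf JSON summary in this log.
summary = {
    "iops": 18036.771138927474,
    "mibps": 70.45613726143544,
    "avg_latency_us": 7093.940244396175,
    "queue_depth": 128,
    "io_size": 4096,
}

# MiB/s is just IOPS times the I/O size.
mibps = summary["iops"] * summary["io_size"] / (1024 * 1024)
print(round(mibps, 2))  # 70.46, matching the report

# Little's law: with a constantly full queue, latency ~= depth / IOPS.
est_latency_us = summary["queue_depth"] / summary["iops"] * 1e6
print(round(est_latency_us))  # ~7097 us, close to the reported 7093.94
```

The small gap between the Little's-law estimate and the reported average latency is expected, since the queue is not perfectly full during ramp-up and drain.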
00:34:24.653 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.653 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:24.653 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:24.653 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.653 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:24.653 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:24.653 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:24.914 [2024-11-19 11:28:33.116873] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:24.914 [2024-11-19 11:28:33.117000] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:24.914 [2024-11-19 11:28:33.117034] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:24.914 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:24.914 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3761425e-827e-4ea2-b7ef-a9a2f9d7f33f 00:34:24.914 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=3761425e-827e-4ea2-b7ef-a9a2f9d7f33f 00:34:24.914 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:24.914 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:24.914 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:24.914 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:24.914 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:25.176 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3761425e-827e-4ea2-b7ef-a9a2f9d7f33f -t 2000 00:34:25.176 [ 00:34:25.176 { 00:34:25.176 "name": "3761425e-827e-4ea2-b7ef-a9a2f9d7f33f", 00:34:25.176 "aliases": [ 00:34:25.176 "lvs/lvol" 00:34:25.176 ], 00:34:25.176 "product_name": "Logical Volume", 00:34:25.176 "block_size": 4096, 00:34:25.176 "num_blocks": 38912, 00:34:25.176 "uuid": "3761425e-827e-4ea2-b7ef-a9a2f9d7f33f", 00:34:25.176 "assigned_rate_limits": { 00:34:25.176 "rw_ios_per_sec": 0, 00:34:25.176 "rw_mbytes_per_sec": 0, 00:34:25.176 "r_mbytes_per_sec": 0, 00:34:25.176 "w_mbytes_per_sec": 0 00:34:25.176 }, 00:34:25.176 "claimed": false, 00:34:25.176 "zoned": false, 00:34:25.176 "supported_io_types": { 00:34:25.176 "read": true, 00:34:25.176 "write": true, 00:34:25.176 "unmap": true, 00:34:25.176 "flush": false, 00:34:25.176 "reset": true, 00:34:25.176 "nvme_admin": false, 00:34:25.176 "nvme_io": false, 00:34:25.176 "nvme_io_md": false, 00:34:25.176 "write_zeroes": true, 
00:34:25.176 "zcopy": false, 00:34:25.176 "get_zone_info": false, 00:34:25.176 "zone_management": false, 00:34:25.176 "zone_append": false, 00:34:25.176 "compare": false, 00:34:25.176 "compare_and_write": false, 00:34:25.176 "abort": false, 00:34:25.176 "seek_hole": true, 00:34:25.176 "seek_data": true, 00:34:25.176 "copy": false, 00:34:25.176 "nvme_iov_md": false 00:34:25.176 }, 00:34:25.176 "driver_specific": { 00:34:25.176 "lvol": { 00:34:25.176 "lvol_store_uuid": "6aa9ac75-5b48-45bc-a985-70c614fb1ede", 00:34:25.176 "base_bdev": "aio_bdev", 00:34:25.176 "thin_provision": false, 00:34:25.176 "num_allocated_clusters": 38, 00:34:25.176 "snapshot": false, 00:34:25.176 "clone": false, 00:34:25.176 "esnap_clone": false 00:34:25.176 } 00:34:25.176 } 00:34:25.176 } 00:34:25.176 ] 00:34:25.176 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:25.176 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:25.176 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:25.438 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:25.438 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:25.438 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:25.699 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:25.699 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:25.699 [2024-11-19 11:28:34.006428] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:25.960 request: 00:34:25.960 { 00:34:25.960 "uuid": "6aa9ac75-5b48-45bc-a985-70c614fb1ede", 00:34:25.960 "method": "bdev_lvol_get_lvstores", 00:34:25.960 "req_id": 1 00:34:25.960 } 00:34:25.960 Got JSON-RPC error response 00:34:25.960 response: 00:34:25.960 { 00:34:25.960 "code": -19, 00:34:25.960 "message": "No such device" 00:34:25.960 } 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:25.960 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:26.220 aio_bdev 00:34:26.221 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3761425e-827e-4ea2-b7ef-a9a2f9d7f33f 00:34:26.221 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3761425e-827e-4ea2-b7ef-a9a2f9d7f33f 00:34:26.221 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:26.221 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:26.221 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:26.221 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:26.221 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:26.481 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3761425e-827e-4ea2-b7ef-a9a2f9d7f33f -t 2000 00:34:26.481 [ 00:34:26.481 { 00:34:26.481 "name": "3761425e-827e-4ea2-b7ef-a9a2f9d7f33f", 00:34:26.481 "aliases": [ 00:34:26.481 "lvs/lvol" 00:34:26.481 ], 00:34:26.481 "product_name": "Logical Volume", 00:34:26.481 "block_size": 4096, 00:34:26.481 "num_blocks": 38912, 00:34:26.481 "uuid": "3761425e-827e-4ea2-b7ef-a9a2f9d7f33f", 00:34:26.481 "assigned_rate_limits": { 00:34:26.481 "rw_ios_per_sec": 0, 00:34:26.481 "rw_mbytes_per_sec": 0, 00:34:26.481 
"r_mbytes_per_sec": 0, 00:34:26.481 "w_mbytes_per_sec": 0 00:34:26.481 }, 00:34:26.481 "claimed": false, 00:34:26.481 "zoned": false, 00:34:26.481 "supported_io_types": { 00:34:26.481 "read": true, 00:34:26.481 "write": true, 00:34:26.481 "unmap": true, 00:34:26.481 "flush": false, 00:34:26.481 "reset": true, 00:34:26.481 "nvme_admin": false, 00:34:26.481 "nvme_io": false, 00:34:26.481 "nvme_io_md": false, 00:34:26.481 "write_zeroes": true, 00:34:26.481 "zcopy": false, 00:34:26.481 "get_zone_info": false, 00:34:26.481 "zone_management": false, 00:34:26.481 "zone_append": false, 00:34:26.481 "compare": false, 00:34:26.481 "compare_and_write": false, 00:34:26.481 "abort": false, 00:34:26.481 "seek_hole": true, 00:34:26.481 "seek_data": true, 00:34:26.481 "copy": false, 00:34:26.481 "nvme_iov_md": false 00:34:26.481 }, 00:34:26.481 "driver_specific": { 00:34:26.481 "lvol": { 00:34:26.481 "lvol_store_uuid": "6aa9ac75-5b48-45bc-a985-70c614fb1ede", 00:34:26.481 "base_bdev": "aio_bdev", 00:34:26.481 "thin_provision": false, 00:34:26.481 "num_allocated_clusters": 38, 00:34:26.481 "snapshot": false, 00:34:26.481 "clone": false, 00:34:26.481 "esnap_clone": false 00:34:26.481 } 00:34:26.481 } 00:34:26.481 } 00:34:26.481 ] 00:34:26.481 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:26.481 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:26.481 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:26.742 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:26.742 11:28:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:26.742 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:27.002 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:27.002 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3761425e-827e-4ea2-b7ef-a9a2f9d7f33f 00:34:27.002 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6aa9ac75-5b48-45bc-a985-70c614fb1ede 00:34:27.263 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:27.524 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:27.524 00:34:27.524 real 0m17.696s 00:34:27.524 user 0m35.597s 00:34:27.524 sys 0m2.932s 00:34:27.524 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.524 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:27.524 ************************************ 00:34:27.524 END TEST lvs_grow_dirty 00:34:27.524 ************************************ 
00:34:27.524 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:27.524 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:27.524 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:27.524 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:27.525 nvmf_trace.0 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.525 11:28:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.525 rmmod nvme_tcp 00:34:27.525 rmmod nvme_fabrics 00:34:27.525 rmmod nvme_keyring 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 196359 ']' 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 196359 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 196359 ']' 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 196359 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:27.525 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 196359 00:34:27.785 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:27.785 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:27.785 11:28:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 196359' 00:34:27.785 killing process with pid 196359 00:34:27.785 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 196359 00:34:27.785 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 196359 00:34:27.785 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.786 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.334 11:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.335 00:34:30.335 real 0m45.798s 00:34:30.335 user 0m54.227s 00:34:30.335 sys 0m11.176s 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:30.335 ************************************ 00:34:30.335 END TEST nvmf_lvs_grow 00:34:30.335 ************************************ 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:30.335 ************************************ 00:34:30.335 START TEST nvmf_bdev_io_wait 00:34:30.335 ************************************ 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:30.335 * Looking for test storage... 
00:34:30.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:30.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.335 --rc genhtml_branch_coverage=1 00:34:30.335 --rc genhtml_function_coverage=1 00:34:30.335 --rc genhtml_legend=1 00:34:30.335 --rc geninfo_all_blocks=1 00:34:30.335 --rc geninfo_unexecuted_blocks=1 00:34:30.335 00:34:30.335 ' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:30.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.335 --rc genhtml_branch_coverage=1 00:34:30.335 --rc genhtml_function_coverage=1 00:34:30.335 --rc genhtml_legend=1 00:34:30.335 --rc geninfo_all_blocks=1 00:34:30.335 --rc geninfo_unexecuted_blocks=1 00:34:30.335 00:34:30.335 ' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:30.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.335 --rc genhtml_branch_coverage=1 00:34:30.335 --rc genhtml_function_coverage=1 00:34:30.335 --rc genhtml_legend=1 00:34:30.335 --rc geninfo_all_blocks=1 00:34:30.335 --rc geninfo_unexecuted_blocks=1 00:34:30.335 00:34:30.335 ' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:30.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.335 --rc genhtml_branch_coverage=1 00:34:30.335 --rc genhtml_function_coverage=1 
00:34:30.335 --rc genhtml_legend=1 00:34:30.335 --rc geninfo_all_blocks=1 00:34:30.335 --rc geninfo_unexecuted_blocks=1 00:34:30.335 00:34:30.335 ' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:30.335 11:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.335 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.336 11:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.336 11:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.336 11:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.336 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:38.478 11:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:38.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:38.478 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:38.478 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:38.479 Found net devices under 0000:31:00.0: cvl_0_0 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:38.479 Found net devices under 0000:31:00.1: cvl_0_1 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:38.479 11:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:38.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:38.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:34:38.479 00:34:38.479 --- 10.0.0.2 ping statistics --- 00:34:38.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.479 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:38.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:38.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:34:38.479 00:34:38.479 --- 10.0.0.1 ping statistics --- 00:34:38.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.479 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:34:38.479 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:38.740 11:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=201836 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 201836 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 201836 ']' 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.740 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:38.740 [2024-11-19 11:28:46.927200] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:38.740 [2024-11-19 11:28:46.928346] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:34:38.740 [2024-11-19 11:28:46.928398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.740 [2024-11-19 11:28:47.020755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:38.740 [2024-11-19 11:28:47.063717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.740 [2024-11-19 11:28:47.063756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.740 [2024-11-19 11:28:47.063764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.740 [2024-11-19 11:28:47.063771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.740 [2024-11-19 11:28:47.063777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:38.740 [2024-11-19 11:28:47.065368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.740 [2024-11-19 11:28:47.065482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:38.740 [2024-11-19 11:28:47.065679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.740 [2024-11-19 11:28:47.065680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:38.740 [2024-11-19 11:28:47.065957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.689 11:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:39.689 [2024-11-19 11:28:47.795947] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:39.689 [2024-11-19 11:28:47.796402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:39.689 [2024-11-19 11:28:47.797066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:39.689 [2024-11-19 11:28:47.797202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:39.689 [2024-11-19 11:28:47.806127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:39.689 Malloc0 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.689 11:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:39.689 [2024-11-19 11:28:47.866321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=202017 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=202019 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:39.689 11:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:39.689 { 00:34:39.689 "params": { 00:34:39.689 "name": "Nvme$subsystem", 00:34:39.689 "trtype": "$TEST_TRANSPORT", 00:34:39.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.689 "adrfam": "ipv4", 00:34:39.689 "trsvcid": "$NVMF_PORT", 00:34:39.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.689 "hdgst": ${hdgst:-false}, 00:34:39.689 "ddgst": ${ddgst:-false} 00:34:39.689 }, 00:34:39.689 "method": "bdev_nvme_attach_controller" 00:34:39.689 } 00:34:39.689 EOF 00:34:39.689 )") 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=202021 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:39.689 11:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:39.689 { 00:34:39.689 "params": { 00:34:39.689 "name": "Nvme$subsystem", 00:34:39.689 "trtype": "$TEST_TRANSPORT", 00:34:39.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.689 "adrfam": "ipv4", 00:34:39.689 "trsvcid": "$NVMF_PORT", 00:34:39.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.689 "hdgst": ${hdgst:-false}, 00:34:39.689 "ddgst": ${ddgst:-false} 00:34:39.689 }, 00:34:39.689 "method": "bdev_nvme_attach_controller" 00:34:39.689 } 00:34:39.689 EOF 00:34:39.689 )") 00:34:39.689 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=202024 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:39.690 { 00:34:39.690 "params": { 00:34:39.690 "name": 
"Nvme$subsystem", 00:34:39.690 "trtype": "$TEST_TRANSPORT", 00:34:39.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.690 "adrfam": "ipv4", 00:34:39.690 "trsvcid": "$NVMF_PORT", 00:34:39.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.690 "hdgst": ${hdgst:-false}, 00:34:39.690 "ddgst": ${ddgst:-false} 00:34:39.690 }, 00:34:39.690 "method": "bdev_nvme_attach_controller" 00:34:39.690 } 00:34:39.690 EOF 00:34:39.690 )") 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:39.690 { 00:34:39.690 "params": { 00:34:39.690 "name": "Nvme$subsystem", 00:34:39.690 "trtype": "$TEST_TRANSPORT", 00:34:39.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.690 "adrfam": "ipv4", 00:34:39.690 "trsvcid": "$NVMF_PORT", 00:34:39.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.690 "hdgst": ${hdgst:-false}, 00:34:39.690 "ddgst": ${ddgst:-false} 00:34:39.690 }, 00:34:39.690 "method": 
"bdev_nvme_attach_controller" 00:34:39.690 } 00:34:39.690 EOF 00:34:39.690 )") 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 202017 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:39.690 "params": { 00:34:39.690 "name": "Nvme1", 00:34:39.690 "trtype": "tcp", 00:34:39.690 "traddr": "10.0.0.2", 00:34:39.690 "adrfam": "ipv4", 00:34:39.690 "trsvcid": "4420", 00:34:39.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:39.690 "hdgst": false, 00:34:39.690 "ddgst": false 00:34:39.690 }, 00:34:39.690 "method": "bdev_nvme_attach_controller" 00:34:39.690 }' 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:39.690 "params": { 00:34:39.690 "name": "Nvme1", 00:34:39.690 "trtype": "tcp", 00:34:39.690 "traddr": "10.0.0.2", 00:34:39.690 "adrfam": "ipv4", 00:34:39.690 "trsvcid": "4420", 00:34:39.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:39.690 "hdgst": false, 00:34:39.690 "ddgst": false 00:34:39.690 }, 00:34:39.690 "method": "bdev_nvme_attach_controller" 00:34:39.690 }' 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:39.690 "params": { 00:34:39.690 "name": "Nvme1", 00:34:39.690 "trtype": "tcp", 00:34:39.690 "traddr": "10.0.0.2", 00:34:39.690 "adrfam": "ipv4", 00:34:39.690 "trsvcid": "4420", 00:34:39.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:39.690 "hdgst": false, 00:34:39.690 "ddgst": false 00:34:39.690 }, 00:34:39.690 "method": "bdev_nvme_attach_controller" 00:34:39.690 }' 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:39.690 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:39.690 "params": { 00:34:39.690 "name": "Nvme1", 00:34:39.690 "trtype": "tcp", 00:34:39.690 "traddr": "10.0.0.2", 00:34:39.690 "adrfam": "ipv4", 00:34:39.690 "trsvcid": "4420", 00:34:39.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:39.690 "hdgst": false, 00:34:39.690 "ddgst": false 00:34:39.690 }, 00:34:39.690 "method": "bdev_nvme_attach_controller" 
00:34:39.690 }' 00:34:39.690 [2024-11-19 11:28:47.922900] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:34:39.690 [2024-11-19 11:28:47.922954] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:39.690 [2024-11-19 11:28:47.925732] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:34:39.690 [2024-11-19 11:28:47.925780] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:39.690 [2024-11-19 11:28:47.925944] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:34:39.690 [2024-11-19 11:28:47.925988] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:39.690 [2024-11-19 11:28:47.926641] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:34:39.690 [2024-11-19 11:28:47.926688] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:39.950 [2024-11-19 11:28:48.089431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.950 [2024-11-19 11:28:48.114225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.950 [2024-11-19 11:28:48.119537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:39.951 [2024-11-19 11:28:48.143150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:39.951 [2024-11-19 11:28:48.165353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.951 [2024-11-19 11:28:48.194027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:39.951 [2024-11-19 11:28:48.213337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.951 [2024-11-19 11:28:48.241078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:39.951 Running I/O for 1 seconds... 00:34:39.951 Running I/O for 1 seconds... 00:34:40.211 Running I/O for 1 seconds... 00:34:40.211 Running I/O for 1 seconds... 
00:34:41.153 12785.00 IOPS, 49.94 MiB/s 00:34:41.153 Latency(us) 00:34:41.153 [2024-11-19T10:28:49.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.153 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:41.153 Nvme1n1 : 1.01 12838.97 50.15 0.00 0.00 9936.98 2143.57 11960.32 00:34:41.153 [2024-11-19T10:28:49.505Z] =================================================================================================================== 00:34:41.153 [2024-11-19T10:28:49.505Z] Total : 12838.97 50.15 0.00 0.00 9936.98 2143.57 11960.32 00:34:41.153 13101.00 IOPS, 51.18 MiB/s 00:34:41.153 Latency(us) 00:34:41.153 [2024-11-19T10:28:49.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.153 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:41.153 Nvme1n1 : 1.01 13176.52 51.47 0.00 0.00 9686.64 2457.60 13817.17 00:34:41.153 [2024-11-19T10:28:49.505Z] =================================================================================================================== 00:34:41.153 [2024-11-19T10:28:49.505Z] Total : 13176.52 51.47 0.00 0.00 9686.64 2457.60 13817.17 00:34:41.153 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 202019 00:34:41.153 17585.00 IOPS, 68.69 MiB/s 00:34:41.153 Latency(us) 00:34:41.153 [2024-11-19T10:28:49.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.153 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:41.153 Nvme1n1 : 1.00 17634.33 68.88 0.00 0.00 7243.87 2607.79 11851.09 00:34:41.153 [2024-11-19T10:28:49.505Z] =================================================================================================================== 00:34:41.153 [2024-11-19T10:28:49.505Z] Total : 17634.33 68.88 0.00 0.00 7243.87 2607.79 11851.09 00:34:41.153 176960.00 IOPS, 691.25 MiB/s 00:34:41.153 Latency(us) 00:34:41.153 
[2024-11-19T10:28:49.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.153 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:41.153 Nvme1n1 : 1.00 176607.19 689.87 0.00 0.00 720.75 300.37 1966.08 00:34:41.153 [2024-11-19T10:28:49.505Z] =================================================================================================================== 00:34:41.153 [2024-11-19T10:28:49.505Z] Total : 176607.19 689.87 0.00 0.00 720.75 300.37 1966.08 00:34:41.153 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 202021 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 202024 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.413 11:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.413 rmmod nvme_tcp 00:34:41.413 rmmod nvme_fabrics 00:34:41.413 rmmod nvme_keyring 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 201836 ']' 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 201836 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 201836 ']' 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 201836 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 201836 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 201836' 00:34:41.413 killing process with pid 201836 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 201836 00:34:41.413 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 201836 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.673 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.673 11:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.588 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.588 00:34:43.588 real 0m13.668s 00:34:43.588 user 0m14.820s 00:34:43.588 sys 0m8.036s 00:34:43.588 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.588 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.588 ************************************ 00:34:43.588 END TEST nvmf_bdev_io_wait 00:34:43.588 ************************************ 00:34:43.850 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:43.850 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:43.850 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:43.850 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:43.850 ************************************ 00:34:43.850 START TEST nvmf_queue_depth 00:34:43.850 ************************************ 00:34:43.850 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:43.850 * Looking for test storage... 
00:34:43.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:43.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.850 --rc genhtml_branch_coverage=1 00:34:43.850 --rc genhtml_function_coverage=1 00:34:43.850 --rc genhtml_legend=1 00:34:43.850 --rc geninfo_all_blocks=1 00:34:43.850 --rc geninfo_unexecuted_blocks=1 00:34:43.850 00:34:43.850 ' 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:43.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.850 --rc genhtml_branch_coverage=1 00:34:43.850 --rc genhtml_function_coverage=1 00:34:43.850 --rc genhtml_legend=1 00:34:43.850 --rc geninfo_all_blocks=1 00:34:43.850 --rc geninfo_unexecuted_blocks=1 00:34:43.850 00:34:43.850 ' 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:43.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.850 --rc genhtml_branch_coverage=1 00:34:43.850 --rc genhtml_function_coverage=1 00:34:43.850 --rc genhtml_legend=1 00:34:43.850 --rc geninfo_all_blocks=1 00:34:43.850 --rc geninfo_unexecuted_blocks=1 00:34:43.850 00:34:43.850 ' 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:43.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.850 --rc genhtml_branch_coverage=1 00:34:43.850 --rc genhtml_function_coverage=1 00:34:43.850 --rc genhtml_legend=1 00:34:43.850 --rc 
geninfo_all_blocks=1 00:34:43.850 --rc geninfo_unexecuted_blocks=1 00:34:43.850 00:34:43.850 ' 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.850 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.112 11:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.112 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.113 11:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:44.113 11:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:44.113 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:52.258 
11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:52.258 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.258 11:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:52.258 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:52.258 Found net devices under 0000:31:00.0: cvl_0_0 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:52.258 Found net devices under 0000:31:00.1: cvl_0_1 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:52.258 11:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.258 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.259 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:52.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:52.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:34:52.520 00:34:52.520 --- 10.0.0.2 ping statistics --- 00:34:52.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.520 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:52.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:34:52.520 00:34:52.520 --- 10.0.0.1 ping statistics --- 00:34:52.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.520 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:52.520 11:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=207055 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 207055 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 207055 ']' 00:34:52.520 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.521 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:52.521 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:52.521 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:52.521 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:52.521 [2024-11-19 11:29:00.728007] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:52.521 [2024-11-19 11:29:00.729034] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:34:52.521 [2024-11-19 11:29:00.729075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.521 [2024-11-19 11:29:00.839134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.781 [2024-11-19 11:29:00.888991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.781 [2024-11-19 11:29:00.889042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:52.781 [2024-11-19 11:29:00.889050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.781 [2024-11-19 11:29:00.889058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.781 [2024-11-19 11:29:00.889064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:52.781 [2024-11-19 11:29:00.889786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.781 [2024-11-19 11:29:00.958056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:52.781 [2024-11-19 11:29:00.958319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:53.354 [2024-11-19 11:29:01.586649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:53.354 Malloc0 00:34:53.354 11:29:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:53.354 [2024-11-19 11:29:01.674795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.354 
11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=207297 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 207297 /var/tmp/bdevperf.sock 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 207297 ']' 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:53.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.354 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:53.614 [2024-11-19 11:29:01.733916] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:34:53.614 [2024-11-19 11:29:01.733980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid207297 ] 00:34:53.614 [2024-11-19 11:29:01.817121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.614 [2024-11-19 11:29:01.859074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.185 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.185 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:54.185 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:54.185 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.185 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:54.446 NVMe0n1 00:34:54.446 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.446 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:54.446 Running I/O for 10 seconds... 
00:34:56.772 8192.00 IOPS, 32.00 MiB/s [2024-11-19T10:29:05.695Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-19T10:29:07.120Z] 8867.67 IOPS, 34.64 MiB/s [2024-11-19T10:29:07.756Z] 9422.75 IOPS, 36.81 MiB/s [2024-11-19T10:29:09.139Z] 9942.60 IOPS, 38.84 MiB/s [2024-11-19T10:29:09.706Z] 10300.50 IOPS, 40.24 MiB/s [2024-11-19T10:29:11.090Z] 10589.43 IOPS, 41.36 MiB/s [2024-11-19T10:29:12.033Z] 10791.88 IOPS, 42.16 MiB/s [2024-11-19T10:29:12.976Z] 10923.89 IOPS, 42.67 MiB/s [2024-11-19T10:29:12.976Z] 11061.00 IOPS, 43.21 MiB/s 00:35:04.624 Latency(us) 00:35:04.624 [2024-11-19T10:29:12.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.624 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:04.624 Verification LBA range: start 0x0 length 0x4000 00:35:04.624 NVMe0n1 : 10.06 11093.47 43.33 0.00 0.00 91980.84 22937.60 85196.80 00:35:04.624 [2024-11-19T10:29:12.976Z] =================================================================================================================== 00:35:04.624 [2024-11-19T10:29:12.976Z] Total : 11093.47 43.33 0.00 0.00 91980.84 22937.60 85196.80 00:35:04.624 { 00:35:04.624 "results": [ 00:35:04.624 { 00:35:04.624 "job": "NVMe0n1", 00:35:04.624 "core_mask": "0x1", 00:35:04.624 "workload": "verify", 00:35:04.624 "status": "finished", 00:35:04.624 "verify_range": { 00:35:04.624 "start": 0, 00:35:04.624 "length": 16384 00:35:04.624 }, 00:35:04.624 "queue_depth": 1024, 00:35:04.624 "io_size": 4096, 00:35:04.624 "runtime": 10.059976, 00:35:04.624 "iops": 11093.465829341938, 00:35:04.624 "mibps": 43.333850895866945, 00:35:04.624 "io_failed": 0, 00:35:04.624 "io_timeout": 0, 00:35:04.624 "avg_latency_us": 91980.842330227, 00:35:04.624 "min_latency_us": 22937.6, 00:35:04.624 "max_latency_us": 85196.8 00:35:04.624 } 00:35:04.624 ], 00:35:04.624 "core_count": 1 00:35:04.624 } 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 207297 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 207297 ']' 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 207297 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 207297 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207297' 00:35:04.624 killing process with pid 207297 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 207297 00:35:04.624 Received shutdown signal, test time was about 10.000000 seconds 00:35:04.624 00:35:04.624 Latency(us) 00:35:04.624 [2024-11-19T10:29:12.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.624 [2024-11-19T10:29:12.976Z] =================================================================================================================== 00:35:04.624 [2024-11-19T10:29:12.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.624 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 207297 00:35:04.886 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:04.886 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:04.886 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:04.886 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:04.886 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:04.886 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:04.886 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:04.886 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:04.886 rmmod nvme_tcp 00:35:04.886 rmmod nvme_fabrics 00:35:04.886 rmmod nvme_keyring 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 207055 ']' 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 207055 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 207055 ']' 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 207055 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 
-- # uname 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 207055 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207055' 00:35:04.886 killing process with pid 207055 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 207055 00:35:04.886 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 207055 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.147 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.061 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:07.061 00:35:07.061 real 0m23.337s 00:35:07.061 user 0m24.788s 00:35:07.061 sys 0m8.080s 00:35:07.061 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.061 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:07.061 ************************************ 00:35:07.061 END TEST nvmf_queue_depth 00:35:07.061 ************************************ 00:35:07.061 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:07.061 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:07.061 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.061 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:07.061 ************************************ 00:35:07.061 START TEST nvmf_target_multipath 00:35:07.061 ************************************ 
00:35:07.061 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:07.324 * Looking for test storage... 00:35:07.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.324 11:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.324 11:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:07.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.324 --rc genhtml_branch_coverage=1 00:35:07.324 --rc genhtml_function_coverage=1 00:35:07.324 --rc genhtml_legend=1 00:35:07.324 --rc geninfo_all_blocks=1 00:35:07.324 --rc geninfo_unexecuted_blocks=1 00:35:07.324 00:35:07.324 ' 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:07.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.324 --rc genhtml_branch_coverage=1 00:35:07.324 --rc genhtml_function_coverage=1 00:35:07.324 --rc genhtml_legend=1 00:35:07.324 --rc geninfo_all_blocks=1 00:35:07.324 --rc geninfo_unexecuted_blocks=1 00:35:07.324 00:35:07.324 ' 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:07.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.324 --rc genhtml_branch_coverage=1 00:35:07.324 --rc 
genhtml_function_coverage=1 00:35:07.324 --rc genhtml_legend=1 00:35:07.324 --rc geninfo_all_blocks=1 00:35:07.324 --rc geninfo_unexecuted_blocks=1 00:35:07.324 00:35:07.324 ' 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:07.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.324 --rc genhtml_branch_coverage=1 00:35:07.324 --rc genhtml_function_coverage=1 00:35:07.324 --rc genhtml_legend=1 00:35:07.324 --rc geninfo_all_blocks=1 00:35:07.324 --rc geninfo_unexecuted_blocks=1 00:35:07.324 00:35:07.324 ' 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.324 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.325 11:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.325 11:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:07.325 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local 
-a pci_net_devs 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:15.466 11:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:15.466 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.466 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:15.467 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:15.467 Found net devices under 0000:31:00.0: cvl_0_0 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.467 11:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:15.467 Found net devices under 0000:31:00.1: cvl_0_1 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:15.467 11:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:15.467 
11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:15.467 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:15.727 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:15.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:35:15.727 00:35:15.727 --- 10.0.0.2 ping statistics --- 00:35:15.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.727 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:35:15.727 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:15.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:15.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:35:15.727 00:35:15.727 --- 10.0.0.1 ping statistics --- 00:35:15.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.728 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:15.728 only one NIC for nvmf test 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:15.728 11:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:15.728 rmmod nvme_tcp 00:35:15.728 rmmod nvme_fabrics 00:35:15.728 rmmod nvme_keyring 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:15.728 11:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.728 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.270 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:18.270 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:18.270 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:18.270 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.270 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:18.270 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.270 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.271 
11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:18.271 00:35:18.271 real 0m10.655s 00:35:18.271 user 0m2.346s 00:35:18.271 sys 0m6.256s 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:18.271 ************************************ 00:35:18.271 END TEST nvmf_target_multipath 00:35:18.271 ************************************ 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:18.271 ************************************ 00:35:18.271 START TEST nvmf_zcopy 00:35:18.271 ************************************ 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:18.271 * Looking for test storage... 
00:35:18.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:18.271 11:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:18.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.271 --rc genhtml_branch_coverage=1 00:35:18.271 --rc genhtml_function_coverage=1 00:35:18.271 --rc genhtml_legend=1 00:35:18.271 --rc geninfo_all_blocks=1 00:35:18.271 --rc geninfo_unexecuted_blocks=1 00:35:18.271 00:35:18.271 ' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:18.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.271 --rc genhtml_branch_coverage=1 00:35:18.271 --rc genhtml_function_coverage=1 00:35:18.271 --rc genhtml_legend=1 00:35:18.271 --rc geninfo_all_blocks=1 00:35:18.271 --rc geninfo_unexecuted_blocks=1 00:35:18.271 00:35:18.271 ' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:18.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.271 --rc genhtml_branch_coverage=1 00:35:18.271 --rc genhtml_function_coverage=1 00:35:18.271 --rc genhtml_legend=1 00:35:18.271 --rc geninfo_all_blocks=1 00:35:18.271 --rc geninfo_unexecuted_blocks=1 00:35:18.271 00:35:18.271 ' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:18.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.271 --rc genhtml_branch_coverage=1 00:35:18.271 --rc genhtml_function_coverage=1 00:35:18.271 --rc genhtml_legend=1 00:35:18.271 --rc geninfo_all_blocks=1 00:35:18.271 --rc geninfo_unexecuted_blocks=1 00:35:18.271 00:35:18.271 ' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:18.271 11:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:18.271 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:18.272 11:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:18.272 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:26.408 
11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.408 11:29:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:26.408 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:26.408 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.408 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:26.408 Found net devices under 0000:31:00.0: cvl_0_0 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:26.409 Found net devices under 0000:31:00.1: cvl_0_1 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.409 11:29:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:26.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:35:26.409 00:35:26.409 --- 10.0.0.2 ping statistics --- 00:35:26.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.409 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:35:26.409 00:35:26.409 --- 10.0.0.1 ping statistics --- 00:35:26.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.409 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=218788 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 218788 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 218788 ']' 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.409 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:26.409 [2024-11-19 11:29:34.548312] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:26.409 [2024-11-19 11:29:34.549303] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:35:26.409 [2024-11-19 11:29:34.549338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.409 [2024-11-19 11:29:34.651037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.409 [2024-11-19 11:29:34.686270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.409 [2024-11-19 11:29:34.686305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.409 [2024-11-19 11:29:34.686313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.409 [2024-11-19 11:29:34.686319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.409 [2024-11-19 11:29:34.686325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:26.409 [2024-11-19 11:29:34.686913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.409 [2024-11-19 11:29:34.742468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:26.409 [2024-11-19 11:29:34.742724] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:26.979 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.979 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:35:26.979 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:26.979 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.979 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:27.240 [2024-11-19 11:29:35.359642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:27.240 
11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:27.240 [2024-11-19 11:29:35.387910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:27.240 malloc0 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:27.240 { 00:35:27.240 "params": { 00:35:27.240 "name": "Nvme$subsystem", 00:35:27.240 "trtype": "$TEST_TRANSPORT", 00:35:27.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.240 "adrfam": "ipv4", 00:35:27.240 "trsvcid": "$NVMF_PORT", 00:35:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.240 "hdgst": ${hdgst:-false}, 00:35:27.240 "ddgst": ${ddgst:-false} 00:35:27.240 }, 00:35:27.240 "method": "bdev_nvme_attach_controller" 00:35:27.240 } 00:35:27.240 EOF 00:35:27.240 )") 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:27.240 11:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:27.240 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:27.240 "params": { 00:35:27.240 "name": "Nvme1", 00:35:27.240 "trtype": "tcp", 00:35:27.240 "traddr": "10.0.0.2", 00:35:27.240 "adrfam": "ipv4", 00:35:27.240 "trsvcid": "4420", 00:35:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.240 "hdgst": false, 00:35:27.240 "ddgst": false 00:35:27.240 }, 00:35:27.240 "method": "bdev_nvme_attach_controller" 00:35:27.240 }' 00:35:27.240 [2024-11-19 11:29:35.461138] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:35:27.240 [2024-11-19 11:29:35.461181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218823 ] 00:35:27.240 [2024-11-19 11:29:35.529767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.240 [2024-11-19 11:29:35.565872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.501 Running I/O for 10 seconds... 
00:35:29.822 6629.00 IOPS, 51.79 MiB/s [2024-11-19T10:29:39.113Z] 6667.50 IOPS, 52.09 MiB/s [2024-11-19T10:29:40.054Z] 6692.67 IOPS, 52.29 MiB/s [2024-11-19T10:29:40.997Z] 6702.00 IOPS, 52.36 MiB/s [2024-11-19T10:29:41.939Z] 6701.80 IOPS, 52.36 MiB/s [2024-11-19T10:29:42.883Z] 6880.17 IOPS, 53.75 MiB/s [2024-11-19T10:29:44.268Z] 7287.43 IOPS, 56.93 MiB/s [2024-11-19T10:29:45.210Z] 7590.88 IOPS, 59.30 MiB/s [2024-11-19T10:29:46.152Z] 7826.89 IOPS, 61.15 MiB/s [2024-11-19T10:29:46.152Z] 8015.10 IOPS, 62.62 MiB/s 00:35:37.800 Latency(us) 00:35:37.800 [2024-11-19T10:29:46.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.800 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:37.800 Verification LBA range: start 0x0 length 0x1000 00:35:37.800 Nvme1n1 : 10.01 8018.95 62.65 0.00 0.00 15909.49 2334.72 26105.17 00:35:37.800 [2024-11-19T10:29:46.152Z] =================================================================================================================== 00:35:37.800 [2024-11-19T10:29:46.152Z] Total : 8018.95 62.65 0.00 0.00 15909.49 2334.72 26105.17 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=220831 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:37.800 11:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:37.800 { 00:35:37.800 "params": { 00:35:37.800 "name": "Nvme$subsystem", 00:35:37.800 "trtype": "$TEST_TRANSPORT", 00:35:37.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.800 "adrfam": "ipv4", 00:35:37.800 "trsvcid": "$NVMF_PORT", 00:35:37.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.800 "hdgst": ${hdgst:-false}, 00:35:37.800 "ddgst": ${ddgst:-false} 00:35:37.800 }, 00:35:37.800 "method": "bdev_nvme_attach_controller" 00:35:37.800 } 00:35:37.800 EOF 00:35:37.800 )") 00:35:37.800 [2024-11-19 11:29:45.979230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.800 [2024-11-19 11:29:45.979261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:37.800 [2024-11-19 11:29:45.987197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.800 [2024-11-19 11:29:45.987207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:37.800 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:37.800 "params": { 00:35:37.800 "name": "Nvme1", 00:35:37.800 "trtype": "tcp", 00:35:37.800 "traddr": "10.0.0.2", 00:35:37.800 "adrfam": "ipv4", 00:35:37.800 "trsvcid": "4420", 00:35:37.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.800 "hdgst": false, 00:35:37.800 "ddgst": false 00:35:37.800 }, 00:35:37.800 "method": "bdev_nvme_attach_controller" 00:35:37.800 }' 00:35:37.800 [2024-11-19 11:29:45.995195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.800 [2024-11-19 11:29:45.995203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.003194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.003201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.011194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.011201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.023195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.023202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.031195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.031201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.034825] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:35:37.801 [2024-11-19 11:29:46.034886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220831 ] 00:35:37.801 [2024-11-19 11:29:46.039194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.039202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.047194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.047201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.055194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.055201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.063195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.063201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.071194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.071201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.079194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.079200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.087194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.087201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:35:37.801 [2024-11-19 11:29:46.095195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.095202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.103194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.103201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.111194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.111200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.111589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.801 [2024-11-19 11:29:46.119195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.119204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.127194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.127205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.135194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.135202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.143195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.801 [2024-11-19 11:29:46.143203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.801 [2024-11-19 11:29:46.146777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.062 [2024-11-19 11:29:46.151194] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.062 [2024-11-19 11:29:46.151201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2123 / nvmf_rpc.c:1517 error pair repeats at roughly 8-13 ms intervals from 11:29:46.151 through 11:29:47.983; repeated entries elided ...]
00:35:38.062 Running I/O for 5 seconds...
00:35:39.108 18932.00 IOPS, 147.91 MiB/s [2024-11-19T10:29:47.460Z]
[... error pair repetitions continue past the end of this excerpt ...]
add namespace 00:35:39.891 [2024-11-19 11:29:47.995837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.891 [2024-11-19 11:29:47.995851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.891 [2024-11-19 11:29:48.008265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.891 [2024-11-19 11:29:48.008278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.891 [2024-11-19 11:29:48.019281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.891 [2024-11-19 11:29:48.019295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.891 [2024-11-19 11:29:48.025158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.891 [2024-11-19 11:29:48.025172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.891 [2024-11-19 11:29:48.033973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.891 [2024-11-19 11:29:48.033987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.891 [2024-11-19 11:29:48.046876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.891 [2024-11-19 11:29:48.046890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.891 [2024-11-19 11:29:48.060060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.891 [2024-11-19 11:29:48.060074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.891 [2024-11-19 11:29:48.072200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.891 [2024-11-19 11:29:48.072214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.891 [2024-11-19 11:29:48.084402] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.084417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.095662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.095676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.108870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.108885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.119007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.119021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.132400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.132414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.143849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.143871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.156566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.156580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.167057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.167071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.179866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.179881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.192457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.192472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.203458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.203472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.216415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.216429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.227492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.227506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.892 [2024-11-19 11:29:48.240446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.892 [2024-11-19 11:29:48.240462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.153 [2024-11-19 11:29:48.251489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.153 [2024-11-19 11:29:48.251504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.153 [2024-11-19 11:29:48.264427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.153 [2024-11-19 11:29:48.264442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.153 [2024-11-19 11:29:48.276322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.153 
[2024-11-19 11:29:48.276337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.153 [2024-11-19 11:29:48.287213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.153 [2024-11-19 11:29:48.287227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.153 19031.50 IOPS, 148.68 MiB/s [2024-11-19T10:29:48.505Z] [2024-11-19 11:29:48.299092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.153 [2024-11-19 11:29:48.299106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.312350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.312364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.323387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.323402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.329497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.329512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.338612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.338626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.351667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.351681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.364360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 
[2024-11-19 11:29:48.364374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.376612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.376627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.387523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.387538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.400332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.400347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.411412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.411427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.417352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.417366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.426185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.426199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.439206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.439221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.445582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.445597] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.454915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.454930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.467785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.467799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.480356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.480370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.154 [2024-11-19 11:29:48.492196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.154 [2024-11-19 11:29:48.492211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.504301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.504315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.516138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.516153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.528251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.528266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.539843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.539858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:40.415 [2024-11-19 11:29:48.552483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.552497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.563316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.563331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.569349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.569363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.582823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.582837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.596152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.596167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.608142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.608156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.620109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.620124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.631353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.631369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.415 [2024-11-19 11:29:48.637248] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.415 [2024-11-19 11:29:48.637263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.645784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.645799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.658871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.658886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.671697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.671711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.684456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.684470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.695334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.695349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.701392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.701406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.710428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.710443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.723270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.723285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.729543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.729557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.742542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.742557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.416 [2024-11-19 11:29:48.755484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.416 [2024-11-19 11:29:48.755498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.768406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.768421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.779273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.779288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.785186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.785200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.794091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.794106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.806908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 
[2024-11-19 11:29:48.806922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.819828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.819843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.832851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.832871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.843179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.843194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.849000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.849015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.858746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.858760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.871999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.872013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.884743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.884757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.894958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.894973] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.907782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.907796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.920527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.920541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.931894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.931908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.944223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.944237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.956516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.956530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.968479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.968493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.980057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.980071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:48.992407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:48.992421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:40.678 [2024-11-19 11:29:49.003208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:49.003222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:49.009132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:49.009146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.678 [2024-11-19 11:29:49.018754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.678 [2024-11-19 11:29:49.018769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.031883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.031897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.044552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.044566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.055370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.055384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.061333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.061347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.070376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.070390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.083158] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.083173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.089778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.089792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.096914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.096929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.108118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.108132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.120273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.120287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.132104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.132118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.144387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.144402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.155216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.155230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.161102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.161116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.170099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.170117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.183119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.183134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.189454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.189468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.198691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.198705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.211802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.211817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.224683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.224697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.235061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.235075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.248178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 
[2024-11-19 11:29:49.248192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940 [2024-11-19 11:29:49.259196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.940 [2024-11-19 11:29:49.259210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.940
[the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats continuously from 2024-11-19 11:29:49.265 through 11:29:50.916; repeated occurrences elided]
00:35:41.201 19022.33 IOPS, 148.61 MiB/s [2024-11-19T10:29:49.553Z]
00:35:41.988 19031.00 IOPS, 148.68 MiB/s [2024-11-19T10:29:50.340Z]
00:35:42.773 [2024-11-19 11:29:50.928101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:50.928115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:42.773 [2024-11-19 11:29:50.940491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:50.940505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:50.951694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:50.951708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:50.964611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:50.964626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:50.975829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:50.975843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:50.987988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:50.988002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.000821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.000835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.011124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.011138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.017168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.017186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.026079] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.026093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.039149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.039164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.045483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.045498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.054822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.054837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.067490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.067504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.080388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.080402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.092659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.092673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.103995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.104010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.773 [2024-11-19 11:29:51.116425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:42.773 [2024-11-19 11:29:51.116439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.128512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.128527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.139561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.139575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.152274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.152288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.163195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.163209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.169233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.169247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.182325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.182339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.195079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.195094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.207790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 
[2024-11-19 11:29:51.207804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.220466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.220480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.232498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.232512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.250762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.250777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.264208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.264222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.275041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.275056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.287824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.287838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 [2024-11-19 11:29:51.300112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:43.034 [2024-11-19 11:29:51.300126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:43.034 19054.40 IOPS, 148.86 MiB/s 00:35:43.034 Latency(us) 00:35:43.034 [2024-11-19T10:29:51.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.035 Job: 
Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:35:43.035 Nvme1n1 : 5.01 19053.58 148.86 0.00 0.00 6712.22 2648.75 12506.45 00:35:43.035 [2024-11-19T10:29:51.387Z] =================================================================================================================== 00:35:43.035 [2024-11-19T10:29:51.387Z] Total : 19053.58 148.86 0.00 0.00 6712.22 2648.75 12506.45 00:35:43.035
00:35:43.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (220831) - No such process 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 220831 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:43.295 delay0 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:43.295 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.295
11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:35:43.296 [2024-11-19 11:29:51.556296] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:49.879 Initializing NVMe Controllers 00:35:49.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:49.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:49.879 Initialization complete. Launching workers. 00:35:49.879 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 296, failed: 8578 00:35:49.879 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 8798, failed to submit 76 00:35:49.879 success 8679, unsuccessful 119, failed 0 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:49.879 rmmod nvme_tcp 00:35:49.879 
rmmod nvme_fabrics 00:35:49.879 rmmod nvme_keyring 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 218788 ']' 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 218788 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 218788 ']' 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 218788 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.879 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 218788 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 218788' 00:35:50.139 killing process with pid 218788 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 218788 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 
218788 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:50.139 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:52.682 00:35:52.682 real 0m34.287s 00:35:52.682 user 0m42.794s 00:35:52.682 sys 0m12.060s 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:52.682 
************************************ 00:35:52.682 END TEST nvmf_zcopy 00:35:52.682 ************************************ 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:52.682 ************************************ 00:35:52.682 START TEST nvmf_nmic 00:35:52.682 ************************************ 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:52.682 * Looking for test storage... 
00:35:52.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.682 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:52.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.683 --rc genhtml_branch_coverage=1 00:35:52.683 --rc genhtml_function_coverage=1 00:35:52.683 --rc genhtml_legend=1 00:35:52.683 --rc geninfo_all_blocks=1 00:35:52.683 --rc geninfo_unexecuted_blocks=1 00:35:52.683 00:35:52.683 ' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:52.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.683 --rc genhtml_branch_coverage=1 00:35:52.683 --rc genhtml_function_coverage=1 00:35:52.683 --rc genhtml_legend=1 00:35:52.683 --rc geninfo_all_blocks=1 00:35:52.683 --rc geninfo_unexecuted_blocks=1 00:35:52.683 00:35:52.683 ' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:52.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.683 --rc genhtml_branch_coverage=1 00:35:52.683 --rc genhtml_function_coverage=1 00:35:52.683 --rc genhtml_legend=1 00:35:52.683 --rc geninfo_all_blocks=1 00:35:52.683 --rc geninfo_unexecuted_blocks=1 00:35:52.683 00:35:52.683 ' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:52.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.683 --rc genhtml_branch_coverage=1 00:35:52.683 --rc genhtml_function_coverage=1 00:35:52.683 --rc genhtml_legend=1 00:35:52.683 --rc geninfo_all_blocks=1 00:35:52.683 --rc geninfo_unexecuted_blocks=1 00:35:52.683 00:35:52.683 ' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:52.683 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:52.684 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:52.684 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:00.842 11:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:00.842 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:00.842 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.842 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:00.843 Found net devices under 0000:31:00.0: cvl_0_0 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:00.843 Found net devices under 0000:31:00.1: cvl_0_1 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:00.843 11:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:00.843 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:00.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:00.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:36:00.843 00:36:00.843 --- 10.0.0.2 ping statistics --- 00:36:00.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.843 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:00.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:00.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:36:00.843 00:36:00.843 --- 10.0.0.1 ping statistics --- 00:36:00.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.843 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=228186 
00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 228186 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 228186 ']' 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.843 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.151 [2024-11-19 11:30:09.234479] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:01.151 [2024-11-19 11:30:09.235525] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:36:01.151 [2024-11-19 11:30:09.235570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.151 [2024-11-19 11:30:09.328599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.151 [2024-11-19 11:30:09.368788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.151 [2024-11-19 11:30:09.368825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.151 [2024-11-19 11:30:09.368834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.151 [2024-11-19 11:30:09.368841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.151 [2024-11-19 11:30:09.368847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.151 [2024-11-19 11:30:09.370409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.151 [2024-11-19 11:30:09.370525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.151 [2024-11-19 11:30:09.370839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.151 [2024-11-19 11:30:09.370842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.151 [2024-11-19 11:30:09.426587] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:01.151 [2024-11-19 11:30:09.426742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:01.151 [2024-11-19 11:30:09.427479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:01.151 [2024-11-19 11:30:09.428118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:01.151 [2024-11-19 11:30:09.428234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:01.825 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:01.825 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:01.825 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:01.825 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 [2024-11-19 11:30:10.099374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 Malloc0 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.826 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:02.088 [2024-11-19 11:30:10.183645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.088 11:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:02.088 test case1: single bdev can't be used in multiple subsystems 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:02.088 [2024-11-19 11:30:10.219285] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:02.088 [2024-11-19 11:30:10.219318] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:02.088 [2024-11-19 11:30:10.219328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:02.088 request: 00:36:02.088 { 00:36:02.088 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:02.088 "namespace": { 00:36:02.088 "bdev_name": "Malloc0", 00:36:02.088 "no_auto_visible": false 00:36:02.088 }, 00:36:02.088 "method": "nvmf_subsystem_add_ns", 00:36:02.088 "req_id": 1 00:36:02.088 } 00:36:02.088 Got JSON-RPC error response 00:36:02.088 response: 00:36:02.088 { 00:36:02.088 "code": -32602, 00:36:02.088 "message": "Invalid parameters" 00:36:02.088 } 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:02.088 Adding namespace failed - expected result. 
00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:02.088 test case2: host connect to nvmf target in multiple paths 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:02.088 [2024-11-19 11:30:10.231419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.088 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:02.350 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:02.921 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:02.921 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:02.921 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:02.921 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:02.921 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:04.834 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:04.834 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:04.834 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:04.834 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:04.834 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:04.834 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:04.834 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:04.834 [global] 00:36:04.834 thread=1 00:36:04.834 invalidate=1 00:36:04.834 rw=write 00:36:04.834 time_based=1 00:36:04.834 runtime=1 00:36:04.834 ioengine=libaio 00:36:04.834 direct=1 00:36:04.834 bs=4096 00:36:04.834 iodepth=1 00:36:04.834 norandommap=0 00:36:04.834 numjobs=1 00:36:04.834 00:36:04.834 verify_dump=1 00:36:04.834 verify_backlog=512 00:36:04.834 verify_state_save=0 00:36:04.834 do_verify=1 00:36:04.834 verify=crc32c-intel 00:36:04.834 [job0] 00:36:04.834 filename=/dev/nvme0n1 00:36:04.834 Could not set queue depth (nvme0n1) 00:36:05.094 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:05.094 fio-3.35 00:36:05.094 Starting 1 thread 00:36:06.480 00:36:06.480 job0: (groupid=0, jobs=1): err= 0: pid=229356: Tue Nov 19 
11:30:14 2024 00:36:06.480 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:06.480 slat (nsec): min=7486, max=59448, avg=27491.20, stdev=3611.58 00:36:06.480 clat (usec): min=801, max=1308, avg=1095.38, stdev=65.74 00:36:06.480 lat (usec): min=829, max=1335, avg=1122.87, stdev=65.63 00:36:06.480 clat percentiles (usec): 00:36:06.480 | 1.00th=[ 906], 5.00th=[ 979], 10.00th=[ 1020], 20.00th=[ 1045], 00:36:06.480 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:36:06.480 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:36:06.480 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1303], 00:36:06.480 | 99.99th=[ 1303] 00:36:06.480 write: IOPS=684, BW=2737KiB/s (2803kB/s)(2740KiB/1001msec); 0 zone resets 00:36:06.480 slat (nsec): min=9005, max=64022, avg=29439.59, stdev=10289.96 00:36:06.480 clat (usec): min=297, max=994, avg=578.23, stdev=98.51 00:36:06.480 lat (usec): min=306, max=1029, avg=607.67, stdev=103.69 00:36:06.480 clat percentiles (usec): 00:36:06.480 | 1.00th=[ 343], 5.00th=[ 408], 10.00th=[ 441], 20.00th=[ 494], 00:36:06.480 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 603], 00:36:06.480 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 734], 00:36:06.480 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 996], 99.95th=[ 996], 00:36:06.480 | 99.99th=[ 996] 00:36:06.480 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:06.480 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:06.480 lat (usec) : 500=12.20%, 750=43.61%, 1000=4.76% 00:36:06.480 lat (msec) : 2=39.43% 00:36:06.480 cpu : usr=3.30%, sys=3.80%, ctx=1197, majf=0, minf=1 00:36:06.480 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.480 issued rwts: 
total=512,685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.480 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:06.480 00:36:06.480 Run status group 0 (all jobs): 00:36:06.480 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:36:06.480 WRITE: bw=2737KiB/s (2803kB/s), 2737KiB/s-2737KiB/s (2803kB/s-2803kB/s), io=2740KiB (2806kB), run=1001-1001msec 00:36:06.480 00:36:06.480 Disk stats (read/write): 00:36:06.480 nvme0n1: ios=562/519, merge=0/0, ticks=540/230, in_queue=770, util=92.99% 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:06.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:06.480 11:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:06.480 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:06.480 rmmod nvme_tcp 00:36:06.480 rmmod nvme_fabrics 00:36:06.741 rmmod nvme_keyring 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 228186 ']' 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 228186 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 228186 ']' 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 228186 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228186 00:36:06.741 
11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228186' 00:36:06.741 killing process with pid 228186 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 228186 00:36:06.741 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 228186 00:36:06.741 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:06.741 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:06.741 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:06.741 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:07.002 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:07.002 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:07.002 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:07.002 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:07.002 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:07.002 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.002 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:07.002 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.916 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:08.916 00:36:08.916 real 0m16.677s 00:36:08.916 user 0m37.651s 00:36:08.916 sys 0m8.227s 00:36:08.916 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:08.916 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:08.916 ************************************ 00:36:08.916 END TEST nvmf_nmic 00:36:08.916 ************************************ 00:36:08.916 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:08.916 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:08.916 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:08.916 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:08.916 ************************************ 00:36:08.916 START TEST nvmf_fio_target 00:36:08.916 ************************************ 00:36:08.916 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:09.179 * Looking for test storage... 
00:36:09.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:09.179 
11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.179 --rc genhtml_branch_coverage=1 00:36:09.179 --rc genhtml_function_coverage=1 00:36:09.179 --rc genhtml_legend=1 00:36:09.179 --rc geninfo_all_blocks=1 00:36:09.179 --rc geninfo_unexecuted_blocks=1 00:36:09.179 00:36:09.179 ' 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.179 --rc genhtml_branch_coverage=1 00:36:09.179 --rc genhtml_function_coverage=1 00:36:09.179 --rc genhtml_legend=1 00:36:09.179 --rc geninfo_all_blocks=1 00:36:09.179 --rc geninfo_unexecuted_blocks=1 00:36:09.179 00:36:09.179 ' 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.179 --rc genhtml_branch_coverage=1 00:36:09.179 --rc genhtml_function_coverage=1 00:36:09.179 --rc genhtml_legend=1 00:36:09.179 --rc geninfo_all_blocks=1 00:36:09.179 --rc geninfo_unexecuted_blocks=1 00:36:09.179 00:36:09.179 ' 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.179 --rc genhtml_branch_coverage=1 00:36:09.179 --rc genhtml_function_coverage=1 00:36:09.179 --rc genhtml_legend=1 00:36:09.179 --rc geninfo_all_blocks=1 
00:36:09.179 --rc geninfo_unexecuted_blocks=1 00:36:09.179 00:36:09.179 ' 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:09.179 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:09.180 
11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.180 11:30:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:09.180 
11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:09.180 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:09.180 11:30:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:19.195 11:30:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:36:19.195 Found 0000:31:00.0 (0x8086 - 0x159b)
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:36:19.195 Found 0000:31:00.1 (0x8086 - 0x159b)
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:19.195 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:36:19.196 Found net devices under 0000:31:00.0: cvl_0_0
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:36:19.196 Found net devices under 0000:31:00.1: cvl_0_1
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:19.196 11:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:19.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:19.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms
00:36:19.196 
00:36:19.196 --- 10.0.0.2 ping statistics ---
00:36:19.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:19.196 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:19.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:19.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms
00:36:19.196 
00:36:19.196 --- 10.0.0.1 ping statistics ---
00:36:19.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:19.196 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=234317
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 234317
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 234317 ']'
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:19.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:19.196 [2024-11-19 11:30:26.149001] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:19.196 [2024-11-19 11:30:26.150155] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization...
00:36:19.196 [2024-11-19 11:30:26.150207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:19.196 [2024-11-19 11:30:26.243769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:19.196 [2024-11-19 11:30:26.286613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:19.196 [2024-11-19 11:30:26.286649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:19.196 [2024-11-19 11:30:26.286657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:19.196 [2024-11-19 11:30:26.286664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:19.196 [2024-11-19 11:30:26.286670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:19.196 [2024-11-19 11:30:26.288525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:19.196 [2024-11-19 11:30:26.288665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:36:19.196 [2024-11-19 11:30:26.288827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:19.196 [2024-11-19 11:30:26.288828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:36:19.196 [2024-11-19 11:30:26.345919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:36:19.196 [2024-11-19 11:30:26.346006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:36:19.196 [2024-11-19 11:30:26.346983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:36:19.196 [2024-11-19 11:30:26.347444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:36:19.196 [2024-11-19 11:30:26.347542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:19.196 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:19.197 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:19.197 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:19.197 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:36:19.197 [2024-11-19 11:30:27.153295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:19.197 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:19.197 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:36:19.197 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:19.457 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:36:19.457 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:19.457 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:36:19.457 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:19.718 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:36:19.718 11:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:36:19.979 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:19.979 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:36:19.979 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:20.241 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:36:20.241 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:20.503 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:36:20.503 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:36:20.503 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:36:20.763 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:36:20.763 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:21.024 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:36:21.024 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:36:21.024 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:21.287 [2024-11-19 11:30:29.481436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:21.287 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:36:21.548 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:36:21.548 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:36:22.121 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:36:22.121 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:36:22.121 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:36:22.121 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:36:22.121 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:36:22.121 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:36:24.034 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:36:24.034 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:36:24.034 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:36:24.034 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:36:24.034 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:36:24.034 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:36:24.034 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:36:24.034 [global]
00:36:24.034 thread=1
00:36:24.034 invalidate=1
00:36:24.034 rw=write
00:36:24.034 time_based=1
00:36:24.034 runtime=1
00:36:24.034 ioengine=libaio
00:36:24.034 direct=1
00:36:24.034 bs=4096
00:36:24.034 iodepth=1
00:36:24.035 norandommap=0
00:36:24.035 numjobs=1
00:36:24.035 
00:36:24.035 verify_dump=1
00:36:24.035 verify_backlog=512
00:36:24.035 verify_state_save=0
00:36:24.035 do_verify=1
00:36:24.035 verify=crc32c-intel
00:36:24.035 [job0]
00:36:24.035 filename=/dev/nvme0n1
00:36:24.035 [job1]
00:36:24.035 filename=/dev/nvme0n2
00:36:24.035 [job2]
00:36:24.035 filename=/dev/nvme0n3
00:36:24.035 [job3]
00:36:24.035 filename=/dev/nvme0n4
00:36:24.035 Could not set queue depth (nvme0n1)
00:36:24.035 Could not set queue depth (nvme0n2)
00:36:24.035 Could not set queue depth (nvme0n3)
00:36:24.035 Could not set queue depth (nvme0n4)
00:36:24.610 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:24.610 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:24.611 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:24.611 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:24.611 fio-3.35
00:36:24.611 Starting 4 threads
00:36:25.996 
00:36:25.996 job0: (groupid=0, jobs=1): err= 0: pid=235893: Tue Nov 19 11:30:33 2024
00:36:25.996 read: IOPS=18, BW=75.5KiB/s (77.4kB/s)(76.0KiB/1006msec)
00:36:25.996 slat (nsec): min=25876, max=28828, avg=26287.21, stdev=635.72
00:36:25.996 clat (usec): min=894, max=41980, avg=38978.99, stdev=9227.86
00:36:25.996 lat (usec): min=923, max=42006, avg=39005.28, stdev=9227.25
00:36:25.996 clat percentiles (usec):
00:36:25.996 | 1.00th=[ 898], 5.00th=[ 898], 10.00th=[40633], 20.00th=[41157],
00:36:25.996 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:36:25.996 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206],
00:36:25.996 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:36:25.996 | 99.99th=[42206]
00:36:25.996 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets
00:36:25.996 slat (nsec): min=9658, max=77712, avg=30761.27, stdev=8930.38
00:36:25.996 clat (usec): min=117, max=819, avg=479.48, stdev=107.49
00:36:25.996 lat (usec): min=127, max=853, avg=510.24, stdev=110.55
00:36:25.996 clat percentiles (usec):
00:36:25.996 | 1.00th=[ 249], 5.00th=[ 302], 10.00th=[ 326], 20.00th=[ 371],
00:36:25.996 | 30.00th=[ 420], 40.00th=[ 465], 50.00th=[ 490], 60.00th=[ 515],
00:36:25.996 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 619], 95.00th=[ 652],
00:36:25.996 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 824], 99.95th=[ 824],
00:36:25.996 | 99.99th=[ 824]
00:36:25.996 bw ( KiB/s): min= 4096, max= 4096, per=46.75%, avg=4096.00, stdev= 0.00, samples=1
00:36:25.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:36:25.996 lat (usec) : 250=1.13%, 500=50.66%, 750=44.44%, 1000=0.38%
00:36:25.996 lat (msec) : 50=3.39%
00:36:25.996 cpu : usr=0.70%, sys=1.59%, ctx=532, majf=0, minf=1
00:36:25.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:25.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.996 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:25.996 latency : target=0, window=0, percentile=100.00%, depth=1
00:36:25.996 job1: (groupid=0, jobs=1): err= 0: pid=235894: Tue Nov 19 11:30:33 2024
00:36:25.996 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1019msec)
00:36:25.996 slat (nsec): min=24994, max=26016, avg=25590.41, stdev=276.14
00:36:25.996 clat (usec): min=1310, max=42116, avg=39568.45, stdev=9859.23
00:36:25.996 lat (usec): min=1336, max=42142, avg=39594.04, stdev=9859.24
00:36:25.996 clat percentiles (usec):
00:36:25.996 | 1.00th=[ 1303], 5.00th=[ 1303], 10.00th=[41681], 20.00th=[41681],
00:36:25.996 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:36:25.996 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:36:25.996 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:36:25.996 | 99.99th=[42206]
00:36:25.996 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets
00:36:25.996 slat (nsec): min=9543, max=50635, avg=30267.04, stdev=8190.00
00:36:25.996 clat (usec): min=274, max=933, avg=638.37, stdev=112.46
00:36:25.996 lat (usec): min=288, max=965, avg=668.64, stdev=114.96
00:36:25.996 clat percentiles (usec):
00:36:25.996 | 1.00th=[ 375], 5.00th=[ 449], 10.00th=[ 486], 20.00th=[ 545],
00:36:25.996 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 676],
00:36:25.996 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 824],
00:36:25.996 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 930], 99.95th=[ 930],
00:36:25.996 | 99.99th=[ 930]
00:36:25.996 bw ( KiB/s): min= 4096, max= 4096, per=46.75%, avg=4096.00, stdev= 0.00, samples=1
00:36:25.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:36:25.996 lat (usec) : 500=13.04%, 750=70.32%, 1000=13.42%
00:36:25.996 lat (msec) : 2=0.19%, 50=3.02%
00:36:25.996 cpu : usr=1.18%, sys=1.08%, ctx=529, majf=0, minf=2
00:36:25.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:25.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.996 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:25.996 latency : target=0, window=0, percentile=100.00%, depth=1
00:36:25.996 job2: (groupid=0, jobs=1): err= 0: pid=235898: Tue Nov 19 11:30:33 2024
00:36:25.996 read: IOPS=443, BW=1774KiB/s (1817kB/s)(1776KiB/1001msec)
00:36:25.996 slat (nsec): min=7948, max=60759, avg=26502.22, stdev=3716.19
00:36:25.996 clat (usec): min=501, max=41897, avg=1476.91, stdev=4256.16
00:36:25.996 lat (usec): min=527, max=41923, avg=1503.42, stdev=4256.13
00:36:25.996 clat percentiles (usec):
00:36:25.996 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 889],
00:36:25.996 | 30.00th=[ 938], 40.00th=[ 996], 50.00th=[ 1045], 60.00th=[ 1090],
00:36:25.996 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254],
00:36:25.996 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681],
00:36:25.996 | 99.99th=[41681]
00:36:25.996 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:36:25.996 slat (nsec): min=10201, max=68935, avg=32770.45, stdev=7817.77
00:36:25.996 clat (usec): min=216, max=1060, avg=595.54, stdev=138.13
00:36:25.996 lat (usec): min=228, max=1096, avg=628.31, stdev=140.10
00:36:25.996 clat percentiles (usec):
00:36:25.996 | 1.00th=[ 265], 5.00th=[ 355], 10.00th=[ 404], 20.00th=[ 478],
00:36:25.996 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644],
00:36:25.996 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 807],
00:36:25.996 | 99.00th=[ 922], 99.50th=[ 1004], 99.90th=[ 1057], 99.95th=[ 1057],
00:36:25.996 | 99.99th=[ 1057]
00:36:25.996 bw ( KiB/s): min= 4096, max= 4096, per=46.75%, avg=4096.00, stdev= 0.00, samples=1
00:36:25.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:36:25.996 lat (usec) : 250=0.42%, 500=12.24%, 750=35.88%, 1000=23.95%
00:36:25.996 lat (msec) : 2=26.99%, 50=0.52%
00:36:25.996 cpu : usr=1.50%, sys=2.90%, ctx=959, majf=0, minf=1
00:36:25.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:25.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.996 issued rwts: total=444,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:25.996 latency : target=0, window=0, percentile=100.00%, depth=1
00:36:25.996 job3: (groupid=0, jobs=1): err= 0: pid=235899: Tue Nov 19 11:30:33 2024
00:36:25.996 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:36:25.996 slat (nsec): min=24290, max=60996, avg=26264.92, stdev=4705.94
00:36:25.996 clat (usec): min=793, max=1302, avg=1033.29, stdev=88.50
00:36:25.996 lat (usec): min=819, max=1327, avg=1059.56, stdev=88.44
00:36:25.996 clat percentiles (usec):
00:36:25.996 | 1.00th=[ 824], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 963],
00:36:25.996 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1057],
00:36:25.996 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188],
00:36:25.996 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1303],
00:36:25.996 | 99.99th=[ 1303]
00:36:25.996 write: IOPS=695, BW=2781KiB/s (2848kB/s)(2784KiB/1001msec); 0 zone resets
00:36:25.996 slat (nsec): min=9644, max=58544, avg=31307.36, stdev=7874.08
00:36:25.996 clat (usec): min=193, max=1038, avg=612.42, stdev=148.60
00:36:25.996 lat (usec): min=226, max=1090, avg=643.73, stdev=151.29
00:36:25.996 clat percentiles (usec):
00:36:25.996 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 420], 20.00th=[ 482],
00:36:25.996 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635],
00:36:25.996 | 70.00th=[ 676], 80.00th=[ 742], 90.00th=[ 816], 95.00th=[ 873],
00:36:25.996 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1037], 99.95th=[ 1037],
00:36:25.996 | 99.99th=[ 1037]
00:36:25.996 bw ( KiB/s): min= 4096, max= 4096, per=46.75%, avg=4096.00, stdev= 0.00, samples=1
00:36:25.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:36:25.996 lat (usec) : 250=0.41%, 500=13.25%, 750=33.11%, 1000=25.75%
00:36:25.996 lat (msec) : 2=27.48%
00:36:25.996 cpu : usr=1.20%, sys=4.30%, ctx=1208, majf=0, minf=1
00:36:25.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:25.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.997 issued rwts: total=512,696,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:25.997 latency : target=0, window=0, percentile=100.00%, depth=1
00:36:25.997 
00:36:25.997 Run status group 0 (all jobs):
00:36:25.997 READ: bw=3894KiB/s (3987kB/s), 66.7KiB/s-2046KiB/s (68.3kB/s-2095kB/s), io=3968KiB (4063kB), run=1001-1019msec
00:36:25.997 WRITE: bw=8762KiB/s (8972kB/s), 2010KiB/s-2781KiB/s (2058kB/s-2848kB/s), io=8928KiB (9142kB), run=1001-1019msec
00:36:25.997 
00:36:25.997 Disk stats (read/write):
00:36:25.997 nvme0n1: ios=64/512, merge=0/0, ticks=599/232, in_queue=831, util=87.58%
00:36:25.997 nvme0n2: ios=48/512, merge=0/0, ticks=509/315, in_queue=824, util=87.64%
00:36:25.997 nvme0n3: ios=313/512, merge=0/0, ticks=1436/299, in_queue=1735, util=96.40%
00:36:25.997 nvme0n4: ios=466/512, merge=0/0, ticks=459/297, in_queue=756, util=89.50%
00:36:25.997 11:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:36:25.997 [global]
00:36:25.997 thread=1
00:36:25.997 invalidate=1
00:36:25.997 rw=randwrite
00:36:25.997 time_based=1
00:36:25.997 runtime=1
00:36:25.997 ioengine=libaio
00:36:25.997 direct=1
00:36:25.997 bs=4096
00:36:25.997 iodepth=1
00:36:25.997 norandommap=0
00:36:25.997 numjobs=1
00:36:25.997 
00:36:25.997 verify_dump=1
00:36:25.997 verify_backlog=512
00:36:25.997 verify_state_save=0
00:36:25.997 do_verify=1
00:36:25.997 verify=crc32c-intel
00:36:25.997 [job0]
00:36:25.997 filename=/dev/nvme0n1
00:36:25.997 [job1]
00:36:25.997 filename=/dev/nvme0n2
00:36:25.997 [job2]
00:36:25.997 filename=/dev/nvme0n3
00:36:25.997 [job3]
00:36:25.997 filename=/dev/nvme0n4
00:36:25.997 Could not set queue depth (nvme0n1)
00:36:25.997 Could not set queue depth (nvme0n2)
00:36:25.997 Could not set queue depth (nvme0n3)
00:36:25.997 Could not set queue depth (nvme0n4)
00:36:25.997 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:25.997 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:25.997 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:25.997 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:25.997 fio-3.35
00:36:25.997 Starting 4 threads
00:36:27.384 
00:36:27.384 job0: (groupid=0, jobs=1): err= 0: pid=236411: Tue Nov 19 11:30:35 2024
00:36:27.384 read: IOPS=15, BW=62.4KiB/s (63.9kB/s)(64.0KiB/1026msec)
00:36:27.384 slat (nsec): min=24638, max=25513, avg=24852.44, stdev=218.99
00:36:27.384 clat (usec): min=41635, max=42051, avg=41951.37, stdev=90.89
00:36:27.384 lat (usec): min=41659, max=42076, avg=41976.23, stdev=90.93
00:36:27.384 clat percentiles (usec):
00:36:27.384 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206],
00:36:27.384 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:36:27.384 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:36:27.384 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:36:27.384 | 99.99th=[42206]
00:36:27.384 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets
00:36:27.384 slat (nsec): min=9132, max=51338, avg=29003.81, stdev=8092.78
00:36:27.384 clat (usec): min=199, max=976, avg=655.45, stdev=139.21
00:36:27.384 lat (usec): min=230, max=1007, avg=684.46, stdev=141.89
00:36:27.384 clat percentiles (usec):
00:36:27.384 | 1.00th=[ 322], 5.00th=[ 408], 10.00th=[ 474], 20.00th=[ 537], 00:36:27.384 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:36:27.384 | 70.00th=[ 750], 80.00th=[ 791], 90.00th=[ 832], 95.00th=[ 865], 00:36:27.384 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979], 00:36:27.384 | 99.99th=[ 979] 00:36:27.384 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:36:27.384 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:27.384 lat (usec) : 250=0.19%, 500=13.07%, 750=53.98%, 1000=29.73% 00:36:27.384 lat (msec) : 50=3.03% 00:36:27.384 cpu : usr=0.88%, sys=1.37%, ctx=528, majf=0, minf=1 00:36:27.384 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:27.384 job1: (groupid=0, jobs=1): err= 0: pid=236412: Tue Nov 19 11:30:35 2024 00:36:27.384 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1010msec) 00:36:27.384 slat (nsec): min=25136, max=25746, avg=25413.59, stdev=191.00 00:36:27.384 clat (usec): min=1032, max=42033, avg=38956.99, stdev=9781.80 00:36:27.384 lat (usec): min=1058, max=42059, avg=38982.40, stdev=9781.83 00:36:27.384 clat percentiles (usec): 00:36:27.384 | 1.00th=[ 1037], 5.00th=[ 1037], 10.00th=[41157], 20.00th=[41157], 00:36:27.384 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:27.384 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:36:27.384 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:27.384 | 99.99th=[42206] 00:36:27.384 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:36:27.384 slat (nsec): min=9242, 
max=52221, avg=29158.74, stdev=7937.76 00:36:27.384 clat (usec): min=280, max=1047, avg=640.86, stdev=119.35 00:36:27.384 lat (usec): min=291, max=1080, avg=670.02, stdev=121.53 00:36:27.384 clat percentiles (usec): 00:36:27.384 | 1.00th=[ 371], 5.00th=[ 437], 10.00th=[ 486], 20.00th=[ 537], 00:36:27.384 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:36:27.384 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 799], 95.00th=[ 824], 00:36:27.384 | 99.00th=[ 889], 99.50th=[ 963], 99.90th=[ 1045], 99.95th=[ 1045], 00:36:27.384 | 99.99th=[ 1045] 00:36:27.384 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:36:27.384 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:27.384 lat (usec) : 500=12.10%, 750=67.30%, 1000=17.01% 00:36:27.384 lat (msec) : 2=0.57%, 50=3.02% 00:36:27.384 cpu : usr=0.59%, sys=1.68%, ctx=529, majf=0, minf=1 00:36:27.384 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:27.384 job2: (groupid=0, jobs=1): err= 0: pid=236413: Tue Nov 19 11:30:35 2024 00:36:27.384 read: IOPS=15, BW=63.2KiB/s (64.8kB/s)(64.0KiB/1012msec) 00:36:27.384 slat (nsec): min=25530, max=25976, avg=25739.38, stdev=139.66 00:36:27.384 clat (usec): min=41013, max=42026, avg=41875.16, stdev=261.49 00:36:27.384 lat (usec): min=41039, max=42052, avg=41900.90, stdev=261.49 00:36:27.384 clat percentiles (usec): 00:36:27.384 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:36:27.384 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:27.384 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:27.384 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:27.384 | 99.99th=[42206] 00:36:27.384 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:36:27.384 slat (nsec): min=9616, max=67723, avg=28284.32, stdev=8949.61 00:36:27.384 clat (usec): min=286, max=927, avg=631.45, stdev=117.66 00:36:27.384 lat (usec): min=300, max=958, avg=659.74, stdev=121.65 00:36:27.384 clat percentiles (usec): 00:36:27.384 | 1.00th=[ 363], 5.00th=[ 412], 10.00th=[ 465], 20.00th=[ 519], 00:36:27.384 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 685], 00:36:27.384 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799], 00:36:27.384 | 99.00th=[ 865], 99.50th=[ 873], 99.90th=[ 930], 99.95th=[ 930], 00:36:27.384 | 99.99th=[ 930] 00:36:27.384 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:36:27.384 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:27.384 lat (usec) : 500=15.91%, 750=67.05%, 1000=14.02% 00:36:27.384 lat (msec) : 50=3.03% 00:36:27.384 cpu : usr=0.79%, sys=1.38%, ctx=529, majf=0, minf=1 00:36:27.384 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:27.384 job3: (groupid=0, jobs=1): err= 0: pid=236414: Tue Nov 19 11:30:35 2024 00:36:27.384 read: IOPS=180, BW=723KiB/s (741kB/s)(724KiB/1001msec) 00:36:27.384 slat (nsec): min=7299, max=57994, avg=26905.36, stdev=3968.69 00:36:27.384 clat (usec): min=936, max=41367, avg=3742.54, stdev=9986.27 00:36:27.384 lat (usec): min=960, max=41394, avg=3769.45, stdev=9986.20 00:36:27.384 clat percentiles (usec): 00:36:27.384 | 1.00th=[ 947], 5.00th=[ 1029], 10.00th=[ 1037], 
20.00th=[ 1057], 00:36:27.384 | 30.00th=[ 1074], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1090], 00:36:27.384 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[41157], 00:36:27.384 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:27.384 | 99.99th=[41157] 00:36:27.384 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:36:27.384 slat (nsec): min=8916, max=55625, avg=28009.31, stdev=10001.01 00:36:27.384 clat (usec): min=257, max=875, avg=584.03, stdev=120.72 00:36:27.384 lat (usec): min=267, max=907, avg=612.03, stdev=127.16 00:36:27.384 clat percentiles (usec): 00:36:27.384 | 1.00th=[ 293], 5.00th=[ 351], 10.00th=[ 383], 20.00th=[ 486], 00:36:27.384 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 635], 00:36:27.384 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 750], 00:36:27.384 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 873], 99.95th=[ 873], 00:36:27.384 | 99.99th=[ 873] 00:36:27.384 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:36:27.384 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:27.384 lat (usec) : 500=16.59%, 750=53.68%, 1000=4.18% 00:36:27.384 lat (msec) : 2=23.81%, 50=1.73% 00:36:27.384 cpu : usr=1.80%, sys=2.10%, ctx=693, majf=0, minf=1 00:36:27.384 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.384 issued rwts: total=181,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:27.384 00:36:27.384 Run status group 0 (all jobs): 00:36:27.384 READ: bw=897KiB/s (918kB/s), 62.4KiB/s-723KiB/s (63.9kB/s-741kB/s), io=920KiB (942kB), run=1001-1026msec 00:36:27.384 WRITE: bw=7984KiB/s (8176kB/s), 1996KiB/s-2046KiB/s (2044kB/s-2095kB/s), io=8192KiB 
(8389kB), run=1001-1026msec 00:36:27.384 00:36:27.384 Disk stats (read/write): 00:36:27.384 nvme0n1: ios=34/512, merge=0/0, ticks=504/308, in_queue=812, util=83.07% 00:36:27.384 nvme0n2: ios=55/512, merge=0/0, ticks=557/315, in_queue=872, util=90.22% 00:36:27.384 nvme0n3: ios=10/512, merge=0/0, ticks=418/319, in_queue=737, util=86.54% 00:36:27.384 nvme0n4: ios=128/512, merge=0/0, ticks=407/246, in_queue=653, util=88.81% 00:36:27.384 11:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:27.384 [global] 00:36:27.384 thread=1 00:36:27.384 invalidate=1 00:36:27.384 rw=write 00:36:27.384 time_based=1 00:36:27.384 runtime=1 00:36:27.384 ioengine=libaio 00:36:27.384 direct=1 00:36:27.384 bs=4096 00:36:27.384 iodepth=128 00:36:27.384 norandommap=0 00:36:27.384 numjobs=1 00:36:27.384 00:36:27.384 verify_dump=1 00:36:27.384 verify_backlog=512 00:36:27.384 verify_state_save=0 00:36:27.384 do_verify=1 00:36:27.384 verify=crc32c-intel 00:36:27.384 [job0] 00:36:27.384 filename=/dev/nvme0n1 00:36:27.384 [job1] 00:36:27.384 filename=/dev/nvme0n2 00:36:27.384 [job2] 00:36:27.384 filename=/dev/nvme0n3 00:36:27.384 [job3] 00:36:27.384 filename=/dev/nvme0n4 00:36:27.665 Could not set queue depth (nvme0n1) 00:36:27.665 Could not set queue depth (nvme0n2) 00:36:27.665 Could not set queue depth (nvme0n3) 00:36:27.665 Could not set queue depth (nvme0n4) 00:36:27.929 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:27.929 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:27.929 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:27.929 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:27.929 
fio-3.35 00:36:27.929 Starting 4 threads 00:36:29.316 00:36:29.316 job0: (groupid=0, jobs=1): err= 0: pid=236940: Tue Nov 19 11:30:37 2024 00:36:29.316 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:36:29.316 slat (nsec): min=894, max=31477k, avg=150584.28, stdev=1367381.22 00:36:29.316 clat (usec): min=1386, max=66845, avg=20301.55, stdev=12140.51 00:36:29.316 lat (usec): min=1394, max=66873, avg=20452.13, stdev=12245.58 00:36:29.316 clat percentiles (usec): 00:36:29.316 | 1.00th=[ 1844], 5.00th=[ 4883], 10.00th=[ 6783], 20.00th=[ 8291], 00:36:29.316 | 30.00th=[12649], 40.00th=[16057], 50.00th=[19530], 60.00th=[20579], 00:36:29.316 | 70.00th=[25822], 80.00th=[28705], 90.00th=[38011], 95.00th=[42730], 00:36:29.316 | 99.00th=[56886], 99.50th=[59507], 99.90th=[59507], 99.95th=[60031], 00:36:29.316 | 99.99th=[66847] 00:36:29.316 write: IOPS=3245, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1004msec); 0 zone resets 00:36:29.316 slat (nsec): min=1652, max=19653k, avg=143244.15, stdev=1080581.70 00:36:29.316 clat (usec): min=1234, max=97530, avg=19959.54, stdev=16020.59 00:36:29.316 lat (usec): min=1245, max=97539, avg=20102.78, stdev=16111.68 00:36:29.316 clat percentiles (usec): 00:36:29.316 | 1.00th=[ 3064], 5.00th=[ 6456], 10.00th=[ 7504], 20.00th=[ 9896], 00:36:29.316 | 30.00th=[10945], 40.00th=[12649], 50.00th=[14746], 60.00th=[18744], 00:36:29.316 | 70.00th=[21890], 80.00th=[26346], 90.00th=[36963], 95.00th=[53740], 00:36:29.316 | 99.00th=[92799], 99.50th=[94897], 99.90th=[96994], 99.95th=[98042], 00:36:29.316 | 99.99th=[98042] 00:36:29.316 bw ( KiB/s): min= 8664, max=16384, per=15.08%, avg=12524.00, stdev=5458.86, samples=2 00:36:29.316 iops : min= 2166, max= 4096, avg=3131.00, stdev=1364.72, samples=2 00:36:29.316 lat (msec) : 2=0.76%, 4=2.13%, 10=19.83%, 20=38.85%, 50=34.27% 00:36:29.316 lat (msec) : 100=4.17% 00:36:29.316 cpu : usr=2.59%, sys=3.59%, ctx=239, majf=0, minf=2 00:36:29.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 
00:36:29.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:29.316 issued rwts: total=3072,3258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:29.316 job1: (groupid=0, jobs=1): err= 0: pid=236941: Tue Nov 19 11:30:37 2024 00:36:29.316 read: IOPS=9055, BW=35.4MiB/s (37.1MB/s)(36.9MiB/1044msec) 00:36:29.316 slat (nsec): min=887, max=6377.6k, avg=50156.49, stdev=331640.85 00:36:29.316 clat (usec): min=3208, max=51371, avg=7156.39, stdev=4870.63 00:36:29.316 lat (usec): min=3210, max=51372, avg=7206.55, stdev=4880.05 00:36:29.316 clat percentiles (usec): 00:36:29.316 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5669], 00:36:29.316 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6325], 60.00th=[ 6783], 00:36:29.316 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8455], 95.00th=[ 9765], 00:36:29.316 | 99.00th=[44303], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:36:29.316 | 99.99th=[51119] 00:36:29.316 write: IOPS=9318, BW=36.4MiB/s (38.2MB/s)(38.0MiB/1044msec); 0 zone resets 00:36:29.316 slat (nsec): min=1530, max=5640.2k, avg=49610.56, stdev=294162.12 00:36:29.316 clat (usec): min=1766, max=23812, avg=6633.36, stdev=2174.70 00:36:29.316 lat (usec): min=1770, max=23815, avg=6682.97, stdev=2185.25 00:36:29.316 clat percentiles (usec): 00:36:29.316 | 1.00th=[ 3392], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 5473], 00:36:29.316 | 30.00th=[ 5866], 40.00th=[ 6194], 50.00th=[ 6456], 60.00th=[ 6849], 00:36:29.316 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7832], 95.00th=[ 8455], 00:36:29.316 | 99.00th=[19268], 99.50th=[22938], 99.90th=[23725], 99.95th=[23725], 00:36:29.316 | 99.99th=[23725] 00:36:29.316 bw ( KiB/s): min=36864, max=40960, per=46.84%, avg=38912.00, stdev=2896.31, samples=2 00:36:29.316 iops : min= 9216, max=10240, avg=9728.00, stdev=724.08, samples=2 
00:36:29.316 lat (msec) : 2=0.03%, 4=3.17%, 10=93.04%, 20=2.60%, 50=1.08% 00:36:29.316 lat (msec) : 100=0.07% 00:36:29.316 cpu : usr=6.04%, sys=7.38%, ctx=820, majf=0, minf=1 00:36:29.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:36:29.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:29.316 issued rwts: total=9454,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:29.316 job2: (groupid=0, jobs=1): err= 0: pid=236942: Tue Nov 19 11:30:37 2024 00:36:29.316 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:36:29.316 slat (nsec): min=952, max=21557k, avg=121587.92, stdev=1125455.71 00:36:29.316 clat (usec): min=3172, max=59957, avg=16909.22, stdev=8622.00 00:36:29.316 lat (usec): min=3431, max=59964, avg=17030.81, stdev=8695.29 00:36:29.316 clat percentiles (usec): 00:36:29.316 | 1.00th=[ 5866], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[10814], 00:36:29.316 | 30.00th=[12125], 40.00th=[12387], 50.00th=[13435], 60.00th=[16057], 00:36:29.316 | 70.00th=[19530], 80.00th=[21365], 90.00th=[27919], 95.00th=[37487], 00:36:29.316 | 99.00th=[46400], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:36:29.316 | 99.99th=[60031] 00:36:29.316 write: IOPS=4089, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1005msec); 0 zone resets 00:36:29.316 slat (nsec): min=1615, max=21528k, avg=107205.17, stdev=898396.25 00:36:29.316 clat (usec): min=1253, max=41487, avg=14180.24, stdev=6986.72 00:36:29.316 lat (usec): min=1279, max=41519, avg=14287.45, stdev=7065.06 00:36:29.316 clat percentiles (usec): 00:36:29.316 | 1.00th=[ 2376], 5.00th=[ 4686], 10.00th=[ 6063], 20.00th=[ 8455], 00:36:29.316 | 30.00th=[ 9372], 40.00th=[10814], 50.00th=[11863], 60.00th=[15664], 00:36:29.316 | 70.00th=[18220], 80.00th=[20317], 90.00th=[23987], 95.00th=[26346], 00:36:29.316 | 99.00th=[32637], 
99.50th=[32637], 99.90th=[32637], 99.95th=[36963], 00:36:29.316 | 99.99th=[41681] 00:36:29.316 bw ( KiB/s): min=12288, max=20480, per=19.72%, avg=16384.00, stdev=5792.62, samples=2 00:36:29.316 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:36:29.316 lat (msec) : 2=0.35%, 4=1.56%, 10=18.73%, 20=53.84%, 50=25.24% 00:36:29.316 lat (msec) : 100=0.28% 00:36:29.316 cpu : usr=2.29%, sys=5.28%, ctx=291, majf=0, minf=1 00:36:29.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:29.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:29.316 issued rwts: total=4096,4110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:29.316 job3: (groupid=0, jobs=1): err= 0: pid=236943: Tue Nov 19 11:30:37 2024 00:36:29.316 read: IOPS=4240, BW=16.6MiB/s (17.4MB/s)(17.3MiB/1045msec) 00:36:29.316 slat (nsec): min=1002, max=24928k, avg=115683.38, stdev=1134422.09 00:36:29.316 clat (usec): min=912, max=57253, avg=17138.88, stdev=11337.36 00:36:29.316 lat (usec): min=923, max=57277, avg=17254.56, stdev=11417.38 00:36:29.316 clat percentiles (usec): 00:36:29.316 | 1.00th=[ 1975], 5.00th=[ 2802], 10.00th=[ 6194], 20.00th=[ 7898], 00:36:29.316 | 30.00th=[ 9765], 40.00th=[12387], 50.00th=[15008], 60.00th=[17171], 00:36:29.316 | 70.00th=[19530], 80.00th=[24249], 90.00th=[34866], 95.00th=[39584], 00:36:29.316 | 99.00th=[52691], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:36:29.316 | 99.99th=[57410] 00:36:29.316 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:36:29.316 slat (nsec): min=1624, max=18911k, avg=82929.37, stdev=788426.44 00:36:29.316 clat (usec): min=849, max=42757, avg=12290.88, stdev=7086.84 00:36:29.316 lat (usec): min=861, max=42781, avg=12373.81, stdev=7156.33 00:36:29.316 clat percentiles (usec): 00:36:29.316 | 
1.00th=[ 1762], 5.00th=[ 3523], 10.00th=[ 4293], 20.00th=[ 5932], 00:36:29.316 | 30.00th=[ 7570], 40.00th=[ 8848], 50.00th=[10421], 60.00th=[13304], 00:36:29.316 | 70.00th=[15926], 80.00th=[18744], 90.00th=[21890], 95.00th=[24511], 00:36:29.316 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[36963], 00:36:29.316 | 99.99th=[42730] 00:36:29.316 bw ( KiB/s): min=16384, max=20480, per=22.19%, avg=18432.00, stdev=2896.31, samples=2 00:36:29.316 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:36:29.316 lat (usec) : 1000=0.07% 00:36:29.316 lat (msec) : 2=1.34%, 4=5.14%, 10=32.40%, 20=41.64%, 50=18.50% 00:36:29.316 lat (msec) : 100=0.91% 00:36:29.316 cpu : usr=3.74%, sys=5.36%, ctx=250, majf=0, minf=1 00:36:29.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:29.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:29.317 issued rwts: total=4431,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:29.317 00:36:29.317 Run status group 0 (all jobs): 00:36:29.317 READ: bw=78.7MiB/s (82.5MB/s), 12.0MiB/s-35.4MiB/s (12.5MB/s-37.1MB/s), io=82.2MiB (86.2MB), run=1004-1045msec 00:36:29.317 WRITE: bw=81.1MiB/s (85.1MB/s), 12.7MiB/s-36.4MiB/s (13.3MB/s-38.2MB/s), io=84.8MiB (88.9MB), run=1004-1045msec 00:36:29.317 00:36:29.317 Disk stats (read/write): 00:36:29.317 nvme0n1: ios=2307/2560, merge=0/0, ticks=45878/45729, in_queue=91607, util=81.56% 00:36:29.317 nvme0n2: ios=9473/9728, merge=0/0, ticks=30950/30110, in_queue=61060, util=85.18% 00:36:29.317 nvme0n3: ios=3072/3582, merge=0/0, ticks=48238/45190, in_queue=93428, util=86.48% 00:36:29.317 nvme0n4: ios=4430/4608, merge=0/0, ticks=65828/53005, in_queue=118833, util=91.26% 00:36:29.317 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:29.317 [global] 00:36:29.317 thread=1 00:36:29.317 invalidate=1 00:36:29.317 rw=randwrite 00:36:29.317 time_based=1 00:36:29.317 runtime=1 00:36:29.317 ioengine=libaio 00:36:29.317 direct=1 00:36:29.317 bs=4096 00:36:29.317 iodepth=128 00:36:29.317 norandommap=0 00:36:29.317 numjobs=1 00:36:29.317 00:36:29.317 verify_dump=1 00:36:29.317 verify_backlog=512 00:36:29.317 verify_state_save=0 00:36:29.317 do_verify=1 00:36:29.317 verify=crc32c-intel 00:36:29.317 [job0] 00:36:29.317 filename=/dev/nvme0n1 00:36:29.317 [job1] 00:36:29.317 filename=/dev/nvme0n2 00:36:29.317 [job2] 00:36:29.317 filename=/dev/nvme0n3 00:36:29.317 [job3] 00:36:29.317 filename=/dev/nvme0n4 00:36:29.317 Could not set queue depth (nvme0n1) 00:36:29.317 Could not set queue depth (nvme0n2) 00:36:29.317 Could not set queue depth (nvme0n3) 00:36:29.317 Could not set queue depth (nvme0n4) 00:36:29.577 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:29.577 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:29.577 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:29.577 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:29.577 fio-3.35 00:36:29.577 Starting 4 threads 00:36:30.967 00:36:30.967 job0: (groupid=0, jobs=1): err= 0: pid=237465: Tue Nov 19 11:30:39 2024 00:36:30.967 read: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec) 00:36:30.967 slat (nsec): min=914, max=15930k, avg=59782.37, stdev=484647.90 00:36:30.967 clat (usec): min=2654, max=25586, avg=7976.40, stdev=2532.42 00:36:30.967 lat (usec): min=2661, max=25614, avg=8036.19, stdev=2554.60 00:36:30.967 clat percentiles (usec): 00:36:30.967 | 1.00th=[ 3785], 
5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6325], 00:36:30.967 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7767], 00:36:30.967 | 70.00th=[ 8356], 80.00th=[ 9896], 90.00th=[11076], 95.00th=[12256], 00:36:30.967 | 99.00th=[17957], 99.50th=[21627], 99.90th=[22676], 99.95th=[22676], 00:36:30.967 | 99.99th=[25560] 00:36:30.967 write: IOPS=8678, BW=33.9MiB/s (35.5MB/s)(34.1MiB/1005msec); 0 zone resets 00:36:30.967 slat (nsec): min=1565, max=6928.2k, avg=50693.19, stdev=321693.26 00:36:30.967 clat (usec): min=1115, max=24820, avg=6651.06, stdev=1694.73 00:36:30.967 lat (usec): min=1126, max=24833, avg=6701.76, stdev=1707.64 00:36:30.967 clat percentiles (usec): 00:36:30.967 | 1.00th=[ 2409], 5.00th=[ 3916], 10.00th=[ 4293], 20.00th=[ 5080], 00:36:30.967 | 30.00th=[ 5997], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7111], 00:36:30.967 | 70.00th=[ 7308], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9372], 00:36:30.967 | 99.00th=[10683], 99.50th=[10814], 99.90th=[13173], 99.95th=[14615], 00:36:30.967 | 99.99th=[24773] 00:36:30.967 bw ( KiB/s): min=32768, max=36864, per=32.34%, avg=34816.00, stdev=2896.31, samples=2 00:36:30.968 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:36:30.968 lat (msec) : 2=0.15%, 4=3.62%, 10=85.24%, 20=10.55%, 50=0.45% 00:36:30.968 cpu : usr=4.48%, sys=7.87%, ctx=792, majf=0, minf=2 00:36:30.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:30.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:30.968 issued rwts: total=8704,8722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:30.968 job1: (groupid=0, jobs=1): err= 0: pid=237466: Tue Nov 19 11:30:39 2024 00:36:30.968 read: IOPS=8416, BW=32.9MiB/s (34.5MB/s)(33.0MiB/1003msec) 00:36:30.968 slat (nsec): min=926, max=9377.0k, avg=59886.96, 
stdev=478564.26 00:36:30.968 clat (usec): min=1692, max=28461, avg=8035.24, stdev=2490.52 00:36:30.968 lat (usec): min=2766, max=28495, avg=8095.13, stdev=2528.38 00:36:30.968 clat percentiles (usec): 00:36:30.968 | 1.00th=[ 4555], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6325], 00:36:30.968 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7439], 60.00th=[ 7832], 00:36:30.968 | 70.00th=[ 8356], 80.00th=[ 9372], 90.00th=[10814], 95.00th=[12125], 00:36:30.968 | 99.00th=[19268], 99.50th=[21365], 99.90th=[21627], 99.95th=[23200], 00:36:30.968 | 99.99th=[28443] 00:36:30.968 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:36:30.968 slat (nsec): min=1597, max=9911.5k, avg=51840.54, stdev=422611.36 00:36:30.968 clat (usec): min=1104, max=21545, avg=6819.83, stdev=2248.98 00:36:30.968 lat (usec): min=1114, max=21553, avg=6871.67, stdev=2265.07 00:36:30.968 clat percentiles (usec): 00:36:30.968 | 1.00th=[ 3195], 5.00th=[ 3982], 10.00th=[ 4228], 20.00th=[ 4948], 00:36:30.968 | 30.00th=[ 5669], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 6915], 00:36:30.968 | 70.00th=[ 7439], 80.00th=[ 8225], 90.00th=[ 9503], 95.00th=[10421], 00:36:30.968 | 99.00th=[12125], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:36:30.968 | 99.99th=[21627] 00:36:30.968 bw ( KiB/s): min=34136, max=35496, per=32.34%, avg=34816.00, stdev=961.67, samples=2 00:36:30.968 iops : min= 8534, max= 8874, avg=8704.00, stdev=240.42, samples=2 00:36:30.968 lat (msec) : 2=0.05%, 4=3.04%, 10=84.96%, 20=11.27%, 50=0.67% 00:36:30.968 cpu : usr=5.99%, sys=8.08%, ctx=411, majf=0, minf=1 00:36:30.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:30.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:30.968 issued rwts: total=8442,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.968 latency : target=0, window=0, percentile=100.00%, depth=128 
00:36:30.968 job2: (groupid=0, jobs=1): err= 0: pid=237467: Tue Nov 19 11:30:39 2024 00:36:30.968 read: IOPS=4141, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1009msec) 00:36:30.968 slat (nsec): min=1013, max=12728k, avg=92969.37, stdev=735683.62 00:36:30.968 clat (usec): min=2264, max=40326, avg=12161.27, stdev=4388.62 00:36:30.968 lat (usec): min=2267, max=52933, avg=12254.23, stdev=4452.77 00:36:30.968 clat percentiles (usec): 00:36:30.968 | 1.00th=[ 7373], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8455], 00:36:30.968 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[11338], 60.00th=[12780], 00:36:30.968 | 70.00th=[14091], 80.00th=[15664], 90.00th=[16909], 95.00th=[19006], 00:36:30.968 | 99.00th=[27919], 99.50th=[29754], 99.90th=[40109], 99.95th=[40109], 00:36:30.968 | 99.99th=[40109] 00:36:30.968 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:36:30.968 slat (nsec): min=1698, max=16262k, avg=126410.54, stdev=827244.58 00:36:30.968 clat (usec): min=822, max=95835, avg=16690.23, stdev=16107.45 00:36:30.968 lat (usec): min=831, max=95849, avg=16816.64, stdev=16207.58 00:36:30.968 clat percentiles (usec): 00:36:30.968 | 1.00th=[ 5342], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7832], 00:36:30.968 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 9896], 60.00th=[13829], 00:36:30.968 | 70.00th=[16450], 80.00th=[18744], 90.00th=[30016], 95.00th=[57410], 00:36:30.968 | 99.00th=[86508], 99.50th=[88605], 99.90th=[95945], 99.95th=[95945], 00:36:30.968 | 99.99th=[95945] 00:36:30.968 bw ( KiB/s): min=15280, max=21224, per=16.96%, avg=18252.00, stdev=4203.04, samples=2 00:36:30.968 iops : min= 3820, max= 5306, avg=4563.00, stdev=1050.76, samples=2 00:36:30.968 lat (usec) : 1000=0.03% 00:36:30.968 lat (msec) : 2=0.08%, 4=0.16%, 10=46.03%, 20=42.61%, 50=7.76% 00:36:30.968 lat (msec) : 100=3.32% 00:36:30.968 cpu : usr=2.88%, sys=4.86%, ctx=336, majf=0, minf=1 00:36:30.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:30.968 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:30.968 issued rwts: total=4179,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:30.968 job3: (groupid=0, jobs=1): err= 0: pid=237468: Tue Nov 19 11:30:39 2024 00:36:30.968 read: IOPS=5061, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1009msec) 00:36:30.968 slat (nsec): min=1053, max=18547k, avg=98783.18, stdev=851421.61 00:36:30.968 clat (usec): min=1187, max=48347, avg=13725.36, stdev=5781.52 00:36:30.968 lat (usec): min=4681, max=49566, avg=13824.14, stdev=5849.30 00:36:30.968 clat percentiles (usec): 00:36:30.968 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9241], 00:36:30.968 | 30.00th=[ 9634], 40.00th=[11207], 50.00th=[11731], 60.00th=[12780], 00:36:30.968 | 70.00th=[15533], 80.00th=[18744], 90.00th=[20579], 95.00th=[21627], 00:36:30.968 | 99.00th=[37487], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497], 00:36:30.968 | 99.99th=[48497] 00:36:30.968 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:36:30.968 slat (nsec): min=1600, max=12411k, avg=76639.67, stdev=647607.24 00:36:30.968 clat (usec): min=555, max=49543, avg=11317.18, stdev=4509.43 00:36:30.968 lat (usec): min=937, max=49545, avg=11393.82, stdev=4547.17 00:36:30.968 clat percentiles (usec): 00:36:30.968 | 1.00th=[ 2376], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 8160], 00:36:30.968 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10814], 60.00th=[11600], 00:36:30.968 | 70.00th=[12911], 80.00th=[14222], 90.00th=[16319], 95.00th=[17957], 00:36:30.968 | 99.00th=[26346], 99.50th=[35390], 99.90th=[44827], 99.95th=[44827], 00:36:30.968 | 99.99th=[49546] 00:36:30.968 bw ( KiB/s): min=18568, max=22392, per=19.03%, avg=20480.00, stdev=2703.98, samples=2 00:36:30.968 iops : min= 4642, max= 5598, avg=5120.00, stdev=675.99, samples=2 00:36:30.968 lat (usec) : 
750=0.02%, 1000=0.01% 00:36:30.968 lat (msec) : 2=0.44%, 4=0.30%, 10=38.24%, 20=51.60%, 50=9.39% 00:36:30.968 cpu : usr=3.37%, sys=5.65%, ctx=271, majf=0, minf=2 00:36:30.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:30.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:30.968 issued rwts: total=5107,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:30.968 00:36:30.968 Run status group 0 (all jobs): 00:36:30.968 READ: bw=102MiB/s (107MB/s), 16.2MiB/s-33.8MiB/s (17.0MB/s-35.5MB/s), io=103MiB (108MB), run=1003-1009msec 00:36:30.968 WRITE: bw=105MiB/s (110MB/s), 17.8MiB/s-33.9MiB/s (18.7MB/s-35.5MB/s), io=106MiB (111MB), run=1003-1009msec 00:36:30.968 00:36:30.968 Disk stats (read/write): 00:36:30.968 nvme0n1: ios=7203/7175, merge=0/0, ticks=54539/45709, in_queue=100248, util=95.69% 00:36:30.968 nvme0n2: ios=6901/7168, merge=0/0, ticks=53526/46962, in_queue=100488, util=87.96% 00:36:30.968 nvme0n3: ios=3603/3647, merge=0/0, ticks=42827/56878, in_queue=99705, util=96.62% 00:36:30.968 nvme0n4: ios=4348/4608, merge=0/0, ticks=53622/46390, in_queue=100012, util=89.41% 00:36:30.968 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:30.968 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=237566 00:36:30.968 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:30.968 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:30.968 [global] 00:36:30.968 thread=1 00:36:30.968 invalidate=1 00:36:30.969 rw=read 00:36:30.969 time_based=1 00:36:30.969 runtime=10 
00:36:30.969 ioengine=libaio 00:36:30.969 direct=1 00:36:30.969 bs=4096 00:36:30.969 iodepth=1 00:36:30.969 norandommap=1 00:36:30.969 numjobs=1 00:36:30.969 00:36:30.969 [job0] 00:36:30.969 filename=/dev/nvme0n1 00:36:30.969 [job1] 00:36:30.969 filename=/dev/nvme0n2 00:36:30.969 [job2] 00:36:30.969 filename=/dev/nvme0n3 00:36:30.969 [job3] 00:36:30.969 filename=/dev/nvme0n4 00:36:30.969 Could not set queue depth (nvme0n1) 00:36:30.969 Could not set queue depth (nvme0n2) 00:36:30.969 Could not set queue depth (nvme0n3) 00:36:30.969 Could not set queue depth (nvme0n4) 00:36:31.230 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:31.230 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:31.230 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:31.230 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:31.230 fio-3.35 00:36:31.230 Starting 4 threads 00:36:33.779 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:34.041 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:34.041 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8826880, buflen=4096 00:36:34.041 fio: pid=237977, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:34.302 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11444224, buflen=4096 00:36:34.302 fio: pid=237971, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:34.302 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:34.302 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:34.302 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1286144, buflen=4096 00:36:34.302 fio: pid=237932, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:34.302 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:34.302 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:34.564 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:34.564 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:34.564 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=565248, buflen=4096 00:36:34.564 fio: pid=237947, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:34.564 00:36:34.564 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=237932: Tue Nov 19 11:30:42 2024 00:36:34.564 read: IOPS=107, BW=430KiB/s (440kB/s)(1256KiB/2923msec) 00:36:34.564 slat (usec): min=3, max=31275, avg=160.85, stdev=1859.45 00:36:34.564 clat (usec): min=461, max=42992, avg=9059.26, stdev=16369.13 00:36:34.564 lat (usec): min=503, max=46953, avg=9220.54, stdev=16449.03 00:36:34.564 clat percentiles (usec): 00:36:34.564 | 1.00th=[ 570], 5.00th=[ 644], 10.00th=[ 676], 
20.00th=[ 725], 00:36:34.564 | 30.00th=[ 758], 40.00th=[ 799], 50.00th=[ 857], 60.00th=[ 914], 00:36:34.564 | 70.00th=[ 947], 80.00th=[40633], 90.00th=[41681], 95.00th=[42206], 00:36:34.564 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:36:34.564 | 99.99th=[43254] 00:36:34.564 bw ( KiB/s): min= 96, max= 560, per=2.71%, avg=188.80, stdev=207.51, samples=5 00:36:34.564 iops : min= 24, max= 140, avg=47.20, stdev=51.88, samples=5 00:36:34.564 lat (usec) : 500=0.32%, 750=26.98%, 1000=49.52% 00:36:34.564 lat (msec) : 2=2.22%, 4=0.32%, 20=0.32%, 50=20.00% 00:36:34.564 cpu : usr=0.03%, sys=0.27%, ctx=321, majf=0, minf=1 00:36:34.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.564 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.564 issued rwts: total=315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:34.564 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=237947: Tue Nov 19 11:30:42 2024 00:36:34.564 read: IOPS=44, BW=177KiB/s (181kB/s)(552KiB/3119msec) 00:36:34.564 slat (usec): min=3, max=18891, avg=152.87, stdev=1600.95 00:36:34.564 clat (usec): min=498, max=42112, avg=22291.86, stdev=20125.42 00:36:34.564 lat (usec): min=510, max=60039, avg=22445.64, stdev=20323.93 00:36:34.564 clat percentiles (usec): 00:36:34.564 | 1.00th=[ 510], 5.00th=[ 586], 10.00th=[ 676], 20.00th=[ 709], 00:36:34.564 | 30.00th=[ 766], 40.00th=[ 889], 50.00th=[40633], 60.00th=[41157], 00:36:34.564 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:34.564 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:34.564 | 99.99th=[42206] 00:36:34.564 bw ( KiB/s): min= 94, max= 568, per=2.58%, avg=179.67, stdev=190.42, samples=6 00:36:34.564 iops : min= 23, 
max= 142, avg=44.83, stdev=47.65, samples=6 00:36:34.564 lat (usec) : 500=0.72%, 750=26.62%, 1000=16.55% 00:36:34.564 lat (msec) : 2=2.16%, 50=53.24% 00:36:34.564 cpu : usr=0.00%, sys=0.16%, ctx=140, majf=0, minf=2 00:36:34.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.564 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.564 issued rwts: total=139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:34.564 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=237971: Tue Nov 19 11:30:42 2024 00:36:34.564 read: IOPS=1017, BW=4067KiB/s (4165kB/s)(10.9MiB/2748msec) 00:36:34.564 slat (usec): min=7, max=13115, avg=33.94, stdev=280.12 00:36:34.564 clat (usec): min=233, max=3571, avg=933.34, stdev=200.95 00:36:34.564 lat (usec): min=247, max=14159, avg=967.28, stdev=346.95 00:36:34.564 clat percentiles (usec): 00:36:34.564 | 1.00th=[ 441], 5.00th=[ 545], 10.00th=[ 635], 20.00th=[ 766], 00:36:34.564 | 30.00th=[ 857], 40.00th=[ 922], 50.00th=[ 971], 60.00th=[ 1029], 00:36:34.564 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:36:34.564 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1369], 99.95th=[ 1942], 00:36:34.564 | 99.99th=[ 3556] 00:36:34.564 bw ( KiB/s): min= 4048, max= 4208, per=59.64%, avg=4131.20, stdev=70.79, samples=5 00:36:34.564 iops : min= 1012, max= 1052, avg=1032.80, stdev=17.70, samples=5 00:36:34.564 lat (usec) : 250=0.07%, 500=3.15%, 750=15.71%, 1000=36.06% 00:36:34.564 lat (msec) : 2=44.94%, 4=0.04% 00:36:34.564 cpu : usr=1.09%, sys=3.17%, ctx=2797, majf=0, minf=2 00:36:34.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.564 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.564 issued rwts: total=2795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:34.564 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=237977: Tue Nov 19 11:30:42 2024 00:36:34.564 read: IOPS=837, BW=3349KiB/s (3429kB/s)(8620KiB/2574msec) 00:36:34.564 slat (nsec): min=7843, max=64322, avg=26988.65, stdev=3453.23 00:36:34.564 clat (usec): min=728, max=1871, avg=1145.63, stdev=96.07 00:36:34.564 lat (usec): min=754, max=1897, avg=1172.62, stdev=96.05 00:36:34.564 clat percentiles (usec): 00:36:34.564 | 1.00th=[ 832], 5.00th=[ 963], 10.00th=[ 1020], 20.00th=[ 1090], 00:36:34.564 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:36:34.564 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1270], 00:36:34.564 | 99.00th=[ 1319], 99.50th=[ 1352], 99.90th=[ 1401], 99.95th=[ 1418], 00:36:34.564 | 99.99th=[ 1876] 00:36:34.564 bw ( KiB/s): min= 3368, max= 3416, per=49.03%, avg=3396.80, stdev=18.42, samples=5 00:36:34.565 iops : min= 842, max= 854, avg=849.20, stdev= 4.60, samples=5 00:36:34.565 lat (usec) : 750=0.05%, 1000=7.61% 00:36:34.565 lat (msec) : 2=92.30% 00:36:34.565 cpu : usr=0.82%, sys=2.72%, ctx=2156, majf=0, minf=2 00:36:34.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.565 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:34.565 00:36:34.565 Run status group 0 (all jobs): 00:36:34.565 READ: bw=6927KiB/s (7093kB/s), 177KiB/s-4067KiB/s (181kB/s-4165kB/s), io=21.1MiB (22.1MB), run=2574-3119msec 00:36:34.565 00:36:34.565 Disk stats (read/write): 00:36:34.565 
nvme0n1: ios=214/0, merge=0/0, ticks=2754/0, in_queue=2754, util=92.22% 00:36:34.565 nvme0n2: ios=136/0, merge=0/0, ticks=2995/0, in_queue=2995, util=94.53% 00:36:34.565 nvme0n3: ios=2636/0, merge=0/0, ticks=2377/0, in_queue=2377, util=95.95% 00:36:34.565 nvme0n4: ios=2156/0, merge=0/0, ticks=2419/0, in_queue=2419, util=96.19% 00:36:34.826 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:34.826 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:34.826 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:34.826 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:35.087 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:35.087 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:35.348 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:35.348 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:35.610 11:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 237566 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:35.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:35.610 nvmf hotplug test: fio failed as expected 00:36:35.610 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:35.872 11:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:35.872 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:35.872 rmmod nvme_tcp 00:36:35.872 rmmod nvme_fabrics 00:36:35.872 rmmod nvme_keyring 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 234317 ']' 00:36:35.872 11:30:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 234317 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 234317 ']' 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 234317 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234317 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234317' 00:36:35.872 killing process with pid 234317 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 234317 00:36:35.872 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 234317 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:36.134 11:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.142 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:38.142 00:36:38.142 real 0m29.077s 00:36:38.142 user 2m6.080s 00:36:38.142 sys 0m13.043s 00:36:38.142 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:38.142 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:38.142 ************************************ 00:36:38.142 END TEST nvmf_fio_target 00:36:38.142 ************************************ 00:36:38.142 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:38.142 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:38.142 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:38.142 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:38.142 ************************************ 00:36:38.142 START TEST nvmf_bdevio 00:36:38.142 ************************************ 00:36:38.142 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:38.142 * Looking for test storage... 00:36:38.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@337 -- # IFS=.-: 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:38.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.404 --rc genhtml_branch_coverage=1 
00:36:38.404 --rc genhtml_function_coverage=1 00:36:38.404 --rc genhtml_legend=1 00:36:38.404 --rc geninfo_all_blocks=1 00:36:38.404 --rc geninfo_unexecuted_blocks=1 00:36:38.404 00:36:38.404 ' 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:38.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.404 --rc genhtml_branch_coverage=1 00:36:38.404 --rc genhtml_function_coverage=1 00:36:38.404 --rc genhtml_legend=1 00:36:38.404 --rc geninfo_all_blocks=1 00:36:38.404 --rc geninfo_unexecuted_blocks=1 00:36:38.404 00:36:38.404 ' 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:38.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.404 --rc genhtml_branch_coverage=1 00:36:38.404 --rc genhtml_function_coverage=1 00:36:38.404 --rc genhtml_legend=1 00:36:38.404 --rc geninfo_all_blocks=1 00:36:38.404 --rc geninfo_unexecuted_blocks=1 00:36:38.404 00:36:38.404 ' 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:38.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.404 --rc genhtml_branch_coverage=1 00:36:38.404 --rc genhtml_function_coverage=1 00:36:38.404 --rc genhtml_legend=1 00:36:38.404 --rc geninfo_all_blocks=1 00:36:38.404 --rc geninfo_unexecuted_blocks=1 00:36:38.404 00:36:38.404 ' 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.404 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:38.405 11:30:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:38.405 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:46.548 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:46.548 11:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:46.549 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:46.549 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.549 11:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:46.549 Found net devices under 0000:31:00.0: cvl_0_0 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:46.549 Found net devices under 0000:31:00.1: cvl_0_1 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:46.549 11:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:46.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:46.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:36:46.549 00:36:46.549 --- 10.0.0.2 ping statistics --- 00:36:46.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.549 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:46.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:46.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:36:46.549 00:36:46.549 --- 10.0.0.1 ping statistics --- 00:36:46.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.549 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:46.549 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=243370 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 243370 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 243370 ']' 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:46.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:46.550 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:46.550 [2024-11-19 11:30:54.736720] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:46.550 [2024-11-19 11:30:54.737891] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:36:46.550 [2024-11-19 11:30:54.737944] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:46.550 [2024-11-19 11:30:54.846656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:46.550 [2024-11-19 11:30:54.896542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:46.550 [2024-11-19 11:30:54.896593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:46.550 [2024-11-19 11:30:54.896602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:46.550 [2024-11-19 11:30:54.896609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:46.550 [2024-11-19 11:30:54.896615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:46.811 [2024-11-19 11:30:54.898576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:46.811 [2024-11-19 11:30:54.898711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:46.811 [2024-11-19 11:30:54.898892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:46.811 [2024-11-19 11:30:54.898897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:46.811 [2024-11-19 11:30:54.981592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:46.811 [2024-11-19 11:30:54.982884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:46.811 [2024-11-19 11:30:54.982997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:46.811 [2024-11-19 11:30:54.983754] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:46.811 [2024-11-19 11:30:54.983797] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:47.383 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:47.383 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:47.383 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:47.383 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:47.383 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.383 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.383 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.384 [2024-11-19 11:30:55.592156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.384 Malloc0 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:47.384 [2024-11-19 11:30:55.680390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:47.384 { 00:36:47.384 "params": { 00:36:47.384 "name": "Nvme$subsystem", 00:36:47.384 "trtype": "$TEST_TRANSPORT", 00:36:47.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.384 "adrfam": "ipv4", 00:36:47.384 "trsvcid": "$NVMF_PORT", 00:36:47.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.384 "hdgst": ${hdgst:-false}, 00:36:47.384 "ddgst": ${ddgst:-false} 00:36:47.384 }, 00:36:47.384 "method": "bdev_nvme_attach_controller" 00:36:47.384 } 00:36:47.384 EOF 00:36:47.384 )") 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:47.384 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:47.384 "params": { 00:36:47.384 "name": "Nvme1", 00:36:47.384 "trtype": "tcp", 00:36:47.384 "traddr": "10.0.0.2", 00:36:47.384 "adrfam": "ipv4", 00:36:47.384 "trsvcid": "4420", 00:36:47.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:47.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:47.384 "hdgst": false, 00:36:47.384 "ddgst": false 00:36:47.384 }, 00:36:47.384 "method": "bdev_nvme_attach_controller" 00:36:47.384 }' 00:36:47.644 [2024-11-19 11:30:55.736801] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:36:47.644 [2024-11-19 11:30:55.736871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243518 ] 00:36:47.644 [2024-11-19 11:30:55.821238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:47.644 [2024-11-19 11:30:55.865895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.644 [2024-11-19 11:30:55.865970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:47.644 [2024-11-19 11:30:55.865974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.904 I/O targets: 00:36:47.904 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:47.904 00:36:47.904 00:36:47.904 CUnit - A unit testing framework for C - Version 2.1-3 00:36:47.904 http://cunit.sourceforge.net/ 00:36:47.904 00:36:47.904 00:36:47.904 Suite: bdevio tests on: Nvme1n1 00:36:47.904 Test: blockdev write read block ...passed 00:36:48.164 Test: blockdev write zeroes read block ...passed 00:36:48.164 Test: blockdev write zeroes read no split ...passed 00:36:48.164 Test: blockdev 
write zeroes read split ...passed 00:36:48.164 Test: blockdev write zeroes read split partial ...passed 00:36:48.164 Test: blockdev reset ...[2024-11-19 11:30:56.380263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:48.164 [2024-11-19 11:30:56.380327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec94b0 (9): Bad file descriptor 00:36:48.164 [2024-11-19 11:30:56.386496] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:36:48.164 passed 00:36:48.165 Test: blockdev write read 8 blocks ...passed 00:36:48.165 Test: blockdev write read size > 128k ...passed 00:36:48.165 Test: blockdev write read invalid size ...passed 00:36:48.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:48.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:48.165 Test: blockdev write read max offset ...passed 00:36:48.425 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:48.425 Test: blockdev writev readv 8 blocks ...passed 00:36:48.425 Test: blockdev writev readv 30 x 1block ...passed 00:36:48.425 Test: blockdev writev readv block ...passed 00:36:48.425 Test: blockdev writev readv size > 128k ...passed 00:36:48.425 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:48.425 Test: blockdev comparev and writev ...[2024-11-19 11:30:56.689602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.425 [2024-11-19 11:30:56.689625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.425 [2024-11-19 11:30:56.689636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.425 
[2024-11-19 11:30:56.689642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.425 [2024-11-19 11:30:56.690078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.425 [2024-11-19 11:30:56.690086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:48.425 [2024-11-19 11:30:56.690096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.425 [2024-11-19 11:30:56.690101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:48.425 [2024-11-19 11:30:56.690500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.425 [2024-11-19 11:30:56.690507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:48.425 [2024-11-19 11:30:56.690517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.425 [2024-11-19 11:30:56.690523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:48.425 [2024-11-19 11:30:56.690934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.425 [2024-11-19 11:30:56.690942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:48.425 [2024-11-19 11:30:56.690951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:48.425 [2024-11-19 11:30:56.690961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:48.425 passed 00:36:48.425 Test: blockdev nvme passthru rw ...passed 00:36:48.425 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:30:56.775299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:48.425 [2024-11-19 11:30:56.775310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:48.425 [2024-11-19 11:30:56.775529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:48.425 [2024-11-19 11:30:56.775536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:48.686 [2024-11-19 11:30:56.775800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:48.686 [2024-11-19 11:30:56.775807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:48.686 [2024-11-19 11:30:56.776025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:48.686 [2024-11-19 11:30:56.776032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:48.686 passed 00:36:48.686 Test: blockdev nvme admin passthru ...passed 00:36:48.686 Test: blockdev copy ...passed 00:36:48.686 00:36:48.686 Run Summary: Type Total Ran Passed Failed Inactive 00:36:48.686 suites 1 1 n/a 0 0 00:36:48.686 tests 23 23 23 0 0 00:36:48.686 asserts 152 152 152 0 n/a 00:36:48.686 00:36:48.686 Elapsed time = 1.332 
seconds 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:48.686 rmmod nvme_tcp 00:36:48.686 rmmod nvme_fabrics 00:36:48.686 rmmod nvme_keyring 00:36:48.686 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 243370 ']' 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 243370 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 243370 ']' 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 243370 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.686 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 243370 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 243370' 00:36:48.948 killing process with pid 243370 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 243370 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 243370 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:48.948 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.493 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:51.493 00:36:51.493 real 0m12.939s 00:36:51.493 user 0m10.584s 00:36:51.493 sys 0m6.992s 00:36:51.493 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.493 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:51.493 ************************************ 00:36:51.493 END TEST nvmf_bdevio 00:36:51.493 ************************************ 00:36:51.493 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:51.493 00:36:51.493 real 5m10.992s 00:36:51.493 user 10m11.409s 00:36:51.493 sys 2m11.828s 00:36:51.493 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:36:51.493 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:51.493 ************************************ 00:36:51.493 END TEST nvmf_target_core_interrupt_mode 00:36:51.493 ************************************ 00:36:51.493 11:30:59 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:51.493 11:30:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:51.493 11:30:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.493 11:30:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.493 ************************************ 00:36:51.493 START TEST nvmf_interrupt 00:36:51.493 ************************************ 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:51.493 * Looking for test storage... 
00:36:51.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:51.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.493 --rc genhtml_branch_coverage=1 00:36:51.493 --rc genhtml_function_coverage=1 00:36:51.493 --rc genhtml_legend=1 00:36:51.493 --rc geninfo_all_blocks=1 00:36:51.493 --rc geninfo_unexecuted_blocks=1 00:36:51.493 00:36:51.493 ' 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:51.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.493 --rc genhtml_branch_coverage=1 00:36:51.493 --rc 
genhtml_function_coverage=1 00:36:51.493 --rc genhtml_legend=1 00:36:51.493 --rc geninfo_all_blocks=1 00:36:51.493 --rc geninfo_unexecuted_blocks=1 00:36:51.493 00:36:51.493 ' 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:51.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.493 --rc genhtml_branch_coverage=1 00:36:51.493 --rc genhtml_function_coverage=1 00:36:51.493 --rc genhtml_legend=1 00:36:51.493 --rc geninfo_all_blocks=1 00:36:51.493 --rc geninfo_unexecuted_blocks=1 00:36:51.493 00:36:51.493 ' 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:51.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.493 --rc genhtml_branch_coverage=1 00:36:51.493 --rc genhtml_function_coverage=1 00:36:51.493 --rc genhtml_legend=1 00:36:51.493 --rc geninfo_all_blocks=1 00:36:51.493 --rc geninfo_unexecuted_blocks=1 00:36:51.493 00:36:51.493 ' 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.493 
11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.493 11:30:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.494 
11:30:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.494 11:30:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:51.494 
11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:51.494 11:30:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:59.638 11:31:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:59.638 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:59.638 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.638 11:31:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:59.638 Found net devices under 0000:31:00.0: cvl_0_0 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:59.638 Found net devices under 0000:31:00.1: cvl_0_1 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:59.638 11:31:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:59.899 11:31:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:59.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:59.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:36:59.899 00:36:59.899 --- 10.0.0.2 ping statistics --- 00:36:59.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.899 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:59.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:59.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:36:59.899 00:36:59.899 --- 10.0.0.1 ping statistics --- 00:36:59.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.899 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:59.899 11:31:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=248432 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 248432 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 248432 ']' 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.899 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:59.899 [2024-11-19 11:31:08.178080] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:59.899 [2024-11-19 11:31:08.179056] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:36:59.899 [2024-11-19 11:31:08.179093] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.160 [2024-11-19 11:31:08.266383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:00.160 [2024-11-19 11:31:08.301113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.160 [2024-11-19 11:31:08.301145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:00.160 [2024-11-19 11:31:08.301154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.160 [2024-11-19 11:31:08.301160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.160 [2024-11-19 11:31:08.301166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:00.160 [2024-11-19 11:31:08.304880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.160 [2024-11-19 11:31:08.304904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.160 [2024-11-19 11:31:08.359607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:00.160 [2024-11-19 11:31:08.360039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:00.160 [2024-11-19 11:31:08.360102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:00.160 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:00.160 5000+0 records in 00:37:00.160 5000+0 records out 00:37:00.160 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0185558 s, 552 MB/s 00:37:00.161 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:00.161 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.161 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.161 AIO0 00:37:00.161 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.161 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:00.161 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.161 11:31:08 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.161 [2024-11-19 11:31:08.509380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:00.422 [2024-11-19 11:31:08.549855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 248432 0 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
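The four `rpc_cmd` invocations traced above (target/interrupt.sh@18-21) bring the target from "running" to "serving": create the TCP transport, create a subsystem, attach the AIO0 namespace, then add a listener on 10.0.0.2:4420. A dry-run sketch of that sequence, assuming an `rpc.py`-style client (the trace itself goes through SPDK's `rpc_cmd` wrapper; the commands here are printed, not executed):

```shell
# Dry-run sketch of the bring-up RPC sequence from target/interrupt.sh.
# RPC and the rpc() wrapper are illustration-only assumptions; the real
# test sends these over /var/tmp/spdk.sock via rpc_cmd.
RPC="rpc.py"
rpc() { echo "$RPC $*"; }

cmds=$(
  rpc nvmf_create_transport -t tcp -o -u 8192 -q 256
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
)
echo "$cmds"
```

Order matters: the transport must exist before the listener, and the namespace's bdev (AIO0, created from the aiofile a few lines earlier) must exist before `nvmf_subsystem_add_ns`.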
-- # reactor_is_busy_or_idle 248432 0 idle 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=248432 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 248432 -w 256 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248432 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0' 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248432 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 248432 1 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 248432 1 idle 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=248432 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 248432 -w 256 00:37:00.422 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248439 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248439 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 
reactor_1 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=248759 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:00.682 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 248432 0 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 248432 0 busy 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=248432 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 248432 -w 256 00:37:00.683 11:31:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248432 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0' 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248432 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:00.943 11:31:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:37:01.882 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:37:01.882 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:01.882 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
248432 -w 256 00:37:01.882 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248432 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.50 reactor_0' 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248432 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.50 reactor_0 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 248432 1 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 248432 1 busy 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=248432 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:02.143 11:31:10 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 248432 -w 256 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:02.143 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248439 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.31 reactor_1' 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248439 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.31 reactor_1 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:02.403 11:31:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 248759 00:37:12.401 Initializing NVMe Controllers 00:37:12.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:12.401 Controller IO queue size 256, less than 
required. 00:37:12.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:12.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:12.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:12.401 Initialization complete. Launching workers. 00:37:12.401 ======================================================== 00:37:12.401 Latency(us) 00:37:12.401 Device Information : IOPS MiB/s Average min max 00:37:12.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16509.72 64.49 15516.05 2428.13 19297.83 00:37:12.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18819.21 73.51 13605.08 7285.45 29996.79 00:37:12.401 ======================================================== 00:37:12.401 Total : 35328.93 138.00 14498.11 2428.13 29996.79 00:37:12.401 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 248432 0 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 248432 0 idle 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=248432 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 248432 -w 256 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248432 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0' 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248432 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0 00:37:12.401 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 248432 1 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 248432 1 idle 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=248432 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:12.402 11:31:19 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 248432 -w 256 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248439 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248439 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 
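The repeated idle/busy probes above all follow the same parse visible in the trace: run `top -bHn 1 -p <pid>`, grep the reactor thread, strip leading whitespace with `sed`, take field 9 (%CPU) with `awk`, truncate to an integer, and compare against the thresholds (busy requires at least 30% while perf runs, idle requires at most 30%; the default busy threshold is 65%). A condensed, runnable sketch of that classifier, operating on a captured `top` line rather than live output (the function name and the `indeterminate` branch are illustration-only):

```shell
# Condensed sketch of interrupt/common.sh's reactor_is_busy_or_idle parsing.
# Takes one pre-captured `top -bH` line instead of running top live.
classify_reactor() {
    local top_line=$1 cpu_rate
    # Same pipeline as the trace: strip leading spaces, %CPU is field 9.
    cpu_rate=$(echo "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}          # integer truncation, as in the log (99.9 -> 99)
    if (( cpu_rate >= 65 )); then echo busy
    elif (( cpu_rate <= 30 )); then echo idle
    else echo indeterminate
    fi
}

classify_reactor ' 248432 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0'
classify_reactor ' 248432 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.50 reactor_0'
```

This is why interrupt mode is the thing under test here: with `--interrupt-mode` the reactors show ~0% CPU when no I/O is in flight, whereas a polling-mode target would pin its cores near 100% even while idle.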
00:37:12.402 11:31:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:12.402 11:31:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:37:12.402 11:31:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:37:12.402 11:31:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:12.402 11:31:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:12.402 11:31:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 248432 0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 248432 0 idle 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=248432 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 248432 -w 256 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248432 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.49 reactor_0' 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248432 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.49 reactor_0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:14.317 11:31:22 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 248432 1 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 248432 1 idle 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=248432 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 248432 -w 256 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 248439 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1' 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 248439 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # 
cpu_rate=0.0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:14.317 11:31:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:14.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:14.578 
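The `waitforserial` / `waitforserial_disconnect` helpers traced around the `nvme connect` and `nvme disconnect` steps both poll `lsblk -l -o NAME,SERIAL` for the subsystem serial (SPDKISFASTANDAWESOME), one waiting for the device to appear and the other for it to vanish. A sketch of that polling pattern (the function name and the `want` flag are assumptions; the real helpers are two separate functions in autotest_common.sh):

```shell
# Sketch of the serial-polling pattern from autotest_common.sh.
# want=1 waits for a device with the serial to appear, want=0 for it
# to disappear; gives up after ~16 attempts, 2 s apart, as in the trace.
wait_for_serial() {
    local serial=$1 want=$2 i=0 n
    while (( i++ <= 15 )); do
        n=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")
        (( want == 1 && n >= 1 )) && return 0
        (( want == 0 && n == 0 )) && return 0
        sleep 2
    done
    return 1
}
```

Polling `lsblk` rather than the nvme-cli output sidesteps races between controller attach and block-device creation: the test only proceeds once the kernel has actually exposed the namespace as a block device.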
11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:14.578 rmmod nvme_tcp 00:37:14.578 rmmod nvme_fabrics 00:37:14.578 rmmod nvme_keyring 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 248432 ']' 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 248432 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 248432 ']' 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 248432 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 248432 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 248432' 00:37:14.578 killing process with pid 248432 00:37:14.578 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 248432 00:37:14.579 11:31:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 248432 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:14.840 
11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:14.840 11:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.385 11:31:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:17.385 00:37:17.385 real 0m25.622s 00:37:17.385 user 0m40.549s 00:37:17.385 sys 0m10.182s 00:37:17.385 11:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:17.385 11:31:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:17.385 ************************************ 00:37:17.385 END TEST nvmf_interrupt 00:37:17.385 ************************************ 00:37:17.385 00:37:17.385 real 31m4.436s 00:37:17.385 user 61m31.651s 00:37:17.385 sys 10m54.197s 00:37:17.385 11:31:25 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:17.385 11:31:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:17.385 ************************************ 00:37:17.385 END TEST nvmf_tcp 00:37:17.385 ************************************ 00:37:17.385 11:31:25 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:37:17.385 11:31:25 -- spdk/autotest.sh@286 -- # 
run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:17.385 11:31:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:17.385 11:31:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:17.385 11:31:25 -- common/autotest_common.sh@10 -- # set +x 00:37:17.385 ************************************ 00:37:17.385 START TEST spdkcli_nvmf_tcp 00:37:17.385 ************************************ 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:17.385 * Looking for test storage... 00:37:17.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:17.385 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:17.386 11:31:25 
spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.386 --rc genhtml_branch_coverage=1 00:37:17.386 --rc genhtml_function_coverage=1 00:37:17.386 --rc genhtml_legend=1 00:37:17.386 --rc geninfo_all_blocks=1 00:37:17.386 --rc 
geninfo_unexecuted_blocks=1 00:37:17.386 00:37:17.386 ' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.386 --rc genhtml_branch_coverage=1 00:37:17.386 --rc genhtml_function_coverage=1 00:37:17.386 --rc genhtml_legend=1 00:37:17.386 --rc geninfo_all_blocks=1 00:37:17.386 --rc geninfo_unexecuted_blocks=1 00:37:17.386 00:37:17.386 ' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.386 --rc genhtml_branch_coverage=1 00:37:17.386 --rc genhtml_function_coverage=1 00:37:17.386 --rc genhtml_legend=1 00:37:17.386 --rc geninfo_all_blocks=1 00:37:17.386 --rc geninfo_unexecuted_blocks=1 00:37:17.386 00:37:17.386 ' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:17.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.386 --rc genhtml_branch_coverage=1 00:37:17.386 --rc genhtml_function_coverage=1 00:37:17.386 --rc genhtml_legend=1 00:37:17.386 --rc geninfo_all_blocks=1 00:37:17.386 --rc geninfo_unexecuted_blocks=1 00:37:17.386 00:37:17.386 ' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:17.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=251975 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 251975 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 251975 ']' 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 
0x3 -p 0 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:17.386 11:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:17.386 [2024-11-19 11:31:25.531951] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:37:17.386 [2024-11-19 11:31:25.532006] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251975 ] 00:37:17.386 [2024-11-19 11:31:25.611044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:17.386 [2024-11-19 11:31:25.648795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.386 [2024-11-19 11:31:25.648798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp 
== \r\d\m\a ]] 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:18.327 11:31:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:18.327 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:18.327 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:18.327 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:18.327 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:18.327 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:18.327 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:18.327 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:18.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:18.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:18.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:18.327 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:18.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:18.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:18.327 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:18.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:18.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:18.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:18.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:18.328 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:18.328 ' 00:37:20.870 [2024-11-19 11:31:28.783754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.811 [2024-11-19 11:31:29.991675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:37:24.351 [2024-11-19 11:31:32.519132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:26.890 [2024-11-19 11:31:34.729720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:28.272 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:28.272 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:28.272 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:28.272 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:28.272 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:28.272 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:28.272 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:28.272 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:28.272 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:28.272 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:28.272 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:28.272 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:28.272 11:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:37:28.272 11:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:28.272 11:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.272 11:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:28.272 11:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:28.272 11:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.272 11:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:28.272 11:31:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:28.842 11:31:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:28.842 11:31:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:28.842 11:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:28.842 11:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:28.842 11:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.843 11:31:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:28.843 11:31:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:28.843 11:31:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.843 11:31:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:28.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:37:28.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:28.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:28.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:28.843 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:28.843 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:28.843 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:28.843 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:28.843 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:28.843 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:28.843 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:28.843 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:28.843 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:28.843 ' 00:37:34.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:34.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:34.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:34.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:34.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:34.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:34.230 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:34.230 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:34.230 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:34.230 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:34.230 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:34.230 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:34.230 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:34.230 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 251975 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 251975 ']' 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 251975 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251975 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251975' 00:37:34.230 killing process with pid 251975 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 251975 00:37:34.230 11:31:42 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 251975 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 251975 ']' 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 251975 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 251975 ']' 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 251975 00:37:34.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (251975) - No such process 00:37:34.230 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 251975 is not found' 00:37:34.231 Process with pid 251975 is not found 00:37:34.231 11:31:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:34.231 11:31:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:34.231 11:31:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:34.231 00:37:34.231 real 0m17.084s 00:37:34.231 user 0m36.580s 00:37:34.231 sys 0m0.769s 00:37:34.231 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.231 11:31:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:34.231 ************************************ 00:37:34.231 END TEST spdkcli_nvmf_tcp 00:37:34.231 ************************************ 00:37:34.231 11:31:42 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:34.231 11:31:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:37:34.231 11:31:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.231 11:31:42 -- common/autotest_common.sh@10 -- # set +x 00:37:34.231 ************************************ 00:37:34.231 START TEST nvmf_identify_passthru 00:37:34.231 ************************************ 00:37:34.231 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:34.231 * Looking for test storage... 00:37:34.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:34.231 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:34.231 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:37:34.231 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:34.231 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:34.231 11:31:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:34.493 11:31:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:34.493 11:31:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:34.494 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:34.494 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.494 --rc genhtml_branch_coverage=1 00:37:34.494 --rc genhtml_function_coverage=1 00:37:34.494 --rc genhtml_legend=1 00:37:34.494 
--rc geninfo_all_blocks=1 00:37:34.494 --rc geninfo_unexecuted_blocks=1 00:37:34.494 00:37:34.494 ' 00:37:34.494 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.494 --rc genhtml_branch_coverage=1 00:37:34.494 --rc genhtml_function_coverage=1 00:37:34.494 --rc genhtml_legend=1 00:37:34.494 --rc geninfo_all_blocks=1 00:37:34.494 --rc geninfo_unexecuted_blocks=1 00:37:34.494 00:37:34.494 ' 00:37:34.494 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.494 --rc genhtml_branch_coverage=1 00:37:34.494 --rc genhtml_function_coverage=1 00:37:34.494 --rc genhtml_legend=1 00:37:34.494 --rc geninfo_all_blocks=1 00:37:34.494 --rc geninfo_unexecuted_blocks=1 00:37:34.494 00:37:34.494 ' 00:37:34.494 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:34.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.494 --rc genhtml_branch_coverage=1 00:37:34.494 --rc genhtml_function_coverage=1 00:37:34.494 --rc genhtml_legend=1 00:37:34.494 --rc geninfo_all_blocks=1 00:37:34.494 --rc geninfo_unexecuted_blocks=1 00:37:34.494 00:37:34.494 ' 00:37:34.494 11:31:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:34.494 11:31:42 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:34.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:34.494 11:31:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.494 11:31:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:34.494 11:31:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.494 11:31:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.494 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:34.494 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:34.494 11:31:42 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:34.494 11:31:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:42.637 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:42.638 
11:31:50 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:42.638 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:42.638 Found 0000:31:00.1 
(0x8086 - 0x159b) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:42.638 Found net devices under 0000:31:00.0: cvl_0_0 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:42.638 11:31:50 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:42.638 Found net devices under 0000:31:00.1: cvl_0_1 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:42.638 
11:31:50 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:42.638 11:31:50 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:42.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:42.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:37:42.899 00:37:42.899 --- 10.0.0.2 ping statistics --- 00:37:42.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.899 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:42.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:42.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:37:42.899 00:37:42.899 --- 10.0.0.1 ping statistics --- 00:37:42.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.899 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:42.899 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:42.900 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:42.900 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:42.900 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:42.900 11:31:51 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:42.900 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:42.900 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:42.900 
11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:42.900 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:43.161 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:43.161 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:37:43.161 11:31:51 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:37:43.161 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:43.161 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:43.161 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:43.161 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:43.161 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:43.422 11:31:51 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:37:43.422 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:43.422 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:43.422 11:31:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:43.993 11:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:43.993 11:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:43.993 11:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:43.993 11:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=259681 00:37:43.993 11:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:43.993 11:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:43.993 11:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 259681 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 259681 ']' 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.993 11:31:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.253 [2024-11-19 11:31:52.348786] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:37:44.253 [2024-11-19 11:31:52.348844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:44.253 [2024-11-19 11:31:52.437632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:44.253 [2024-11-19 11:31:52.480323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:44.253 [2024-11-19 11:31:52.480361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:44.253 [2024-11-19 11:31:52.480369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:44.253 [2024-11-19 11:31:52.480376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:44.253 [2024-11-19 11:31:52.480382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:44.253 [2024-11-19 11:31:52.482174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.253 [2024-11-19 11:31:52.482324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:44.253 [2024-11-19 11:31:52.482474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.253 [2024-11-19 11:31:52.482474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:44.819 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:44.819 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:44.819 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:44.819 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.819 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:44.819 INFO: Log level set to 20 00:37:44.819 INFO: Requests: 00:37:44.819 { 00:37:44.819 "jsonrpc": "2.0", 00:37:44.819 "method": "nvmf_set_config", 00:37:44.819 "id": 1, 00:37:44.819 "params": { 00:37:44.819 "admin_cmd_passthru": { 00:37:44.819 "identify_ctrlr": true 00:37:44.819 } 00:37:44.819 } 00:37:44.819 } 00:37:44.819 00:37:44.819 INFO: response: 00:37:44.819 { 00:37:44.819 "jsonrpc": "2.0", 00:37:44.819 "id": 1, 00:37:44.819 "result": true 00:37:44.819 } 00:37:44.819 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.078 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.078 INFO: Setting log level to 20 00:37:45.078 INFO: Setting log level to 20 00:37:45.078 INFO: Log level set to 20 00:37:45.078 INFO: Log level set to 20 00:37:45.078 
INFO: Requests: 00:37:45.078 { 00:37:45.078 "jsonrpc": "2.0", 00:37:45.078 "method": "framework_start_init", 00:37:45.078 "id": 1 00:37:45.078 } 00:37:45.078 00:37:45.078 INFO: Requests: 00:37:45.078 { 00:37:45.078 "jsonrpc": "2.0", 00:37:45.078 "method": "framework_start_init", 00:37:45.078 "id": 1 00:37:45.078 } 00:37:45.078 00:37:45.078 [2024-11-19 11:31:53.231710] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:45.078 INFO: response: 00:37:45.078 { 00:37:45.078 "jsonrpc": "2.0", 00:37:45.078 "id": 1, 00:37:45.078 "result": true 00:37:45.078 } 00:37:45.078 00:37:45.078 INFO: response: 00:37:45.078 { 00:37:45.078 "jsonrpc": "2.0", 00:37:45.078 "id": 1, 00:37:45.078 "result": true 00:37:45.078 } 00:37:45.078 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.078 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.078 INFO: Setting log level to 40 00:37:45.078 INFO: Setting log level to 40 00:37:45.078 INFO: Setting log level to 40 00:37:45.078 [2024-11-19 11:31:53.245029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.078 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.078 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:45.078 11:31:53 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.078 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.337 Nvme0n1 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.338 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.338 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.338 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.338 [2024-11-19 11:31:53.648327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.338 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.338 11:31:53 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.338 [ 00:37:45.338 { 00:37:45.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:45.338 "subtype": "Discovery", 00:37:45.338 "listen_addresses": [], 00:37:45.338 "allow_any_host": true, 00:37:45.338 "hosts": [] 00:37:45.338 }, 00:37:45.338 { 00:37:45.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:45.338 "subtype": "NVMe", 00:37:45.338 "listen_addresses": [ 00:37:45.338 { 00:37:45.338 "trtype": "TCP", 00:37:45.338 "adrfam": "IPv4", 00:37:45.338 "traddr": "10.0.0.2", 00:37:45.338 "trsvcid": "4420" 00:37:45.338 } 00:37:45.338 ], 00:37:45.338 "allow_any_host": true, 00:37:45.338 "hosts": [], 00:37:45.338 "serial_number": "SPDK00000000000001", 00:37:45.338 "model_number": "SPDK bdev Controller", 00:37:45.338 "max_namespaces": 1, 00:37:45.338 "min_cntlid": 1, 00:37:45.338 "max_cntlid": 65519, 00:37:45.338 "namespaces": [ 00:37:45.338 { 00:37:45.338 "nsid": 1, 00:37:45.338 "bdev_name": "Nvme0n1", 00:37:45.338 "name": "Nvme0n1", 00:37:45.338 "nguid": "3634473052605494002538450000002D", 00:37:45.338 "uuid": "36344730-5260-5494-0025-38450000002d" 00:37:45.338 } 00:37:45.338 ] 00:37:45.338 } 00:37:45.338 ] 00:37:45.338 11:31:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.338 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:45.338 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:45.338 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:45.596 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:37:45.596 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:45.596 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:45.596 11:31:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:45.855 11:31:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:45.855 11:31:54 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:37:45.855 11:31:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:45.855 11:31:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:45.855 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.855 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.855 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.855 11:31:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:45.855 11:31:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:45.855 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:45.855 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:45.856 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:45.856 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:45.856 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:45.856 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:45.856 rmmod nvme_tcp 00:37:45.856 rmmod nvme_fabrics 00:37:45.856 rmmod nvme_keyring 00:37:45.856 11:31:54 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:45.856 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:45.856 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:45.856 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 259681 ']' 00:37:45.856 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 259681 00:37:45.856 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 259681 ']' 00:37:45.856 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 259681 00:37:45.856 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:45.856 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:45.856 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259681 00:37:46.115 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:46.115 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:46.115 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 259681' 00:37:46.115 killing process with pid 259681 00:37:46.115 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 259681 00:37:46.115 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 259681 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:46.376 11:31:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.376 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:46.376 11:31:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.290 11:31:56 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:48.290 00:37:48.290 real 0m14.167s 00:37:48.290 user 0m10.606s 00:37:48.290 sys 0m7.350s 00:37:48.290 11:31:56 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.290 11:31:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.290 ************************************ 00:37:48.290 END TEST nvmf_identify_passthru 00:37:48.290 ************************************ 00:37:48.290 11:31:56 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:48.290 11:31:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:48.290 11:31:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.290 11:31:56 -- common/autotest_common.sh@10 -- # set +x 00:37:48.290 ************************************ 00:37:48.290 START TEST nvmf_dif 00:37:48.290 ************************************ 00:37:48.290 11:31:56 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:48.552 * Looking for test storage... 
00:37:48.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:48.552 11:31:56 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:48.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.552 --rc genhtml_branch_coverage=1 00:37:48.552 --rc genhtml_function_coverage=1 00:37:48.552 --rc genhtml_legend=1 00:37:48.552 --rc geninfo_all_blocks=1 00:37:48.552 --rc geninfo_unexecuted_blocks=1 00:37:48.552 00:37:48.552 ' 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:48.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.552 --rc genhtml_branch_coverage=1 00:37:48.552 --rc genhtml_function_coverage=1 00:37:48.552 --rc genhtml_legend=1 00:37:48.552 --rc geninfo_all_blocks=1 00:37:48.552 --rc geninfo_unexecuted_blocks=1 00:37:48.552 00:37:48.552 ' 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:37:48.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.552 --rc genhtml_branch_coverage=1 00:37:48.552 --rc genhtml_function_coverage=1 00:37:48.552 --rc genhtml_legend=1 00:37:48.552 --rc geninfo_all_blocks=1 00:37:48.552 --rc geninfo_unexecuted_blocks=1 00:37:48.552 00:37:48.552 ' 00:37:48.552 11:31:56 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:48.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.552 --rc genhtml_branch_coverage=1 00:37:48.552 --rc genhtml_function_coverage=1 00:37:48.552 --rc genhtml_legend=1 00:37:48.552 --rc geninfo_all_blocks=1 00:37:48.552 --rc geninfo_unexecuted_blocks=1 00:37:48.553 00:37:48.553 ' 00:37:48.553 11:31:56 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:48.553 11:31:56 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:48.553 11:31:56 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:48.553 11:31:56 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.553 11:31:56 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.553 11:31:56 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.553 11:31:56 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.553 11:31:56 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.553 11:31:56 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.553 11:31:56 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:48.553 11:31:56 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:48.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:48.553 11:31:56 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:48.553 11:31:56 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:37:48.553 11:31:56 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:48.553 11:31:56 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:48.553 11:31:56 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.553 11:31:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:48.553 11:31:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:48.553 11:31:56 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:37:48.553 11:31:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:56.684 11:32:04 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:56.684 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:56.684 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.684 11:32:04 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:56.684 Found net devices under 0000:31:00.0: cvl_0_0 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:56.684 Found net devices under 0000:31:00.1: cvl_0_1 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.684 
11:32:04 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.684 11:32:04 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:56.685 11:32:04 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:56.685 11:32:05 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:56.943 11:32:05 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:56.943 11:32:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:56.943 11:32:05 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:56.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:56.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:37:56.943 00:37:56.943 --- 10.0.0.2 ping statistics --- 00:37:56.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.943 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:37:56.943 11:32:05 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:56.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:56.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:37:56.943 00:37:56.943 --- 10.0.0.1 ping statistics --- 00:37:56.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.943 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:37:56.943 11:32:05 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:56.943 11:32:05 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:56.943 11:32:05 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:56.943 11:32:05 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:00.239 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:00.239 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:00.239 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:38:00.499 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:00.499 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:00.499 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:00.499 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:00.924 11:32:08 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:00.924 11:32:08 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:00.924 11:32:08 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:00.924 11:32:08 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:00.924 11:32:08 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:00.924 11:32:08 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:00.924 11:32:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:00.924 11:32:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:00.924 11:32:09 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:00.924 11:32:09 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:00.924 11:32:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:00.924 11:32:09 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=266397 00:38:00.924 11:32:09 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 266397 00:38:00.924 11:32:09 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:00.924 11:32:09 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 266397 ']' 00:38:00.924 11:32:09 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:00.924 11:32:09 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.924 11:32:09 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:00.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:00.924 11:32:09 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.924 11:32:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:00.924 [2024-11-19 11:32:09.067338] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:38:00.924 [2024-11-19 11:32:09.067388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:00.924 [2024-11-19 11:32:09.153137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.925 [2024-11-19 11:32:09.188962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:00.925 [2024-11-19 11:32:09.188993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:00.925 [2024-11-19 11:32:09.189000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:00.925 [2024-11-19 11:32:09.189007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:00.925 [2024-11-19 11:32:09.189013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:00.925 [2024-11-19 11:32:09.189580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.871 11:32:09 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:01.871 11:32:09 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:38:01.871 11:32:09 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:01.872 11:32:09 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:01.872 11:32:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.872 11:32:09 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:01.872 11:32:09 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:01.872 11:32:09 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:01.872 11:32:09 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.872 11:32:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.872 [2024-11-19 11:32:09.914005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.872 11:32:09 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.872 11:32:09 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:01.872 11:32:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:01.872 11:32:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:01.872 11:32:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.872 ************************************ 00:38:01.872 START TEST fio_dif_1_default 00:38:01.872 ************************************ 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:01.872 bdev_null0 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.872 11:32:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:01.872 [2024-11-19 11:32:09.998347] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:01.872 { 00:38:01.872 "params": { 00:38:01.872 "name": "Nvme$subsystem", 00:38:01.872 "trtype": "$TEST_TRANSPORT", 00:38:01.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:01.872 "adrfam": "ipv4", 00:38:01.872 "trsvcid": "$NVMF_PORT", 00:38:01.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:01.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:01.872 "hdgst": ${hdgst:-false}, 00:38:01.872 "ddgst": ${ddgst:-false} 00:38:01.872 }, 00:38:01.872 "method": "bdev_nvme_attach_controller" 00:38:01.872 } 00:38:01.872 EOF 00:38:01.872 )") 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:01.872 "params": { 00:38:01.872 "name": "Nvme0", 00:38:01.872 "trtype": "tcp", 00:38:01.872 "traddr": "10.0.0.2", 00:38:01.872 "adrfam": "ipv4", 00:38:01.872 "trsvcid": "4420", 00:38:01.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:01.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:01.872 "hdgst": false, 00:38:01.872 "ddgst": false 00:38:01.872 }, 00:38:01.872 "method": "bdev_nvme_attach_controller" 00:38:01.872 }' 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:01.872 11:32:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:02.131 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:02.131 fio-3.35 
00:38:02.131 Starting 1 thread 00:38:14.368 00:38:14.368 filename0: (groupid=0, jobs=1): err= 0: pid=266933: Tue Nov 19 11:32:20 2024 00:38:14.368 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 00:38:14.368 slat (nsec): min=5397, max=57170, avg=6271.08, stdev=2301.62 00:38:14.368 clat (usec): min=40782, max=42934, avg=41064.78, stdev=289.04 00:38:14.368 lat (usec): min=40790, max=42940, avg=41071.05, stdev=289.58 00:38:14.368 clat percentiles (usec): 00:38:14.368 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:14.368 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:14.368 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:38:14.368 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:14.368 | 99.99th=[42730] 00:38:14.368 bw ( KiB/s): min= 384, max= 416, per=99.62%, avg=388.80, stdev=11.72, samples=20 00:38:14.368 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:14.368 lat (msec) : 50=100.00% 00:38:14.368 cpu : usr=93.50%, sys=6.27%, ctx=16, majf=0, minf=250 00:38:14.369 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.369 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.369 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:14.369 00:38:14.369 Run status group 0 (all jobs): 00:38:14.369 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10024-10024msec 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 00:38:14.369 real 0m11.183s 00:38:14.369 user 0m25.531s 00:38:14.369 sys 0m0.946s 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 ************************************ 00:38:14.369 END TEST fio_dif_1_default 00:38:14.369 ************************************ 00:38:14.369 11:32:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:14.369 11:32:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:14.369 11:32:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 ************************************ 00:38:14.369 START TEST fio_dif_1_multi_subsystems 00:38:14.369 ************************************ 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 bdev_null0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 [2024-11-19 11:32:21.261276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 bdev_null1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 11:32:21 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.369 { 00:38:14.369 "params": { 00:38:14.369 
"name": "Nvme$subsystem", 00:38:14.369 "trtype": "$TEST_TRANSPORT", 00:38:14.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.369 "adrfam": "ipv4", 00:38:14.369 "trsvcid": "$NVMF_PORT", 00:38:14.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:14.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.369 "hdgst": ${hdgst:-false}, 00:38:14.369 "ddgst": ${ddgst:-false} 00:38:14.369 }, 00:38:14.369 "method": "bdev_nvme_attach_controller" 00:38:14.369 } 00:38:14.369 EOF 00:38:14.369 )") 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.369 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.369 { 00:38:14.369 "params": { 00:38:14.369 "name": "Nvme$subsystem", 00:38:14.369 "trtype": "$TEST_TRANSPORT", 00:38:14.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.369 "adrfam": "ipv4", 00:38:14.369 "trsvcid": "$NVMF_PORT", 00:38:14.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:14.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.370 "hdgst": ${hdgst:-false}, 00:38:14.370 "ddgst": ${ddgst:-false} 00:38:14.370 }, 00:38:14.370 "method": "bdev_nvme_attach_controller" 00:38:14.370 } 00:38:14.370 EOF 00:38:14.370 )") 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:14.370 "params": { 00:38:14.370 "name": "Nvme0", 00:38:14.370 "trtype": "tcp", 00:38:14.370 "traddr": "10.0.0.2", 00:38:14.370 "adrfam": "ipv4", 00:38:14.370 "trsvcid": "4420", 00:38:14.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:14.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:14.370 "hdgst": false, 00:38:14.370 "ddgst": false 00:38:14.370 }, 00:38:14.370 "method": "bdev_nvme_attach_controller" 00:38:14.370 },{ 00:38:14.370 "params": { 00:38:14.370 "name": "Nvme1", 00:38:14.370 "trtype": "tcp", 00:38:14.370 "traddr": "10.0.0.2", 00:38:14.370 "adrfam": "ipv4", 00:38:14.370 "trsvcid": "4420", 00:38:14.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:14.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:14.370 "hdgst": false, 00:38:14.370 "ddgst": false 00:38:14.370 }, 00:38:14.370 "method": "bdev_nvme_attach_controller" 00:38:14.370 }' 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:14.370 11:32:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.370 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:14.370 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:14.370 fio-3.35 00:38:14.370 Starting 2 threads 00:38:24.380 00:38:24.380 filename0: (groupid=0, jobs=1): err= 0: pid=269233: Tue Nov 19 11:32:32 2024 00:38:24.380 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10022msec) 00:38:24.380 slat (nsec): min=5403, max=36943, avg=6339.31, stdev=1670.73 00:38:24.380 clat (usec): min=40876, max=43017, avg=41565.28, stdev=536.96 00:38:24.380 lat (usec): min=40884, max=43023, avg=41571.62, stdev=536.94 00:38:24.380 clat percentiles (usec): 00:38:24.380 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:24.380 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:38:24.380 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:24.380 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:38:24.380 | 99.99th=[43254] 00:38:24.380 bw ( KiB/s): min= 384, max= 384, per=33.64%, avg=384.00, stdev= 0.00, samples=20 00:38:24.380 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=20 00:38:24.380 lat (msec) : 50=100.00% 00:38:24.380 cpu : usr=95.11%, sys=4.69%, ctx=13, majf=0, minf=171 00:38:24.380 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.380 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.380 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.380 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:24.380 filename1: (groupid=0, jobs=1): err= 0: pid=269235: Tue Nov 19 11:32:32 2024 00:38:24.380 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10037msec) 00:38:24.380 slat (nsec): min=5417, max=26976, avg=6254.37, stdev=1370.72 00:38:24.380 clat (usec): min=733, max=42068, avg=21111.92, stdev=20155.61 00:38:24.380 lat (usec): min=739, max=42073, avg=21118.18, stdev=20155.60 00:38:24.380 clat percentiles (usec): 00:38:24.380 | 1.00th=[ 791], 5.00th=[ 865], 10.00th=[ 881], 20.00th=[ 906], 00:38:24.380 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[40633], 60.00th=[41157], 00:38:24.380 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:38:24.380 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:24.380 | 99.99th=[42206] 00:38:24.380 bw ( KiB/s): min= 704, max= 768, per=66.41%, avg=758.45, stdev=20.89, samples=20 00:38:24.380 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:38:24.380 lat (usec) : 750=0.16%, 1000=49.00% 00:38:24.380 lat (msec) : 2=0.53%, 4=0.21%, 50=50.11% 00:38:24.380 cpu : usr=95.21%, sys=4.58%, ctx=13, majf=0, minf=33 00:38:24.380 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.380 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.380 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:24.380 00:38:24.380 Run status group 0 (all jobs): 00:38:24.380 READ: bw=1141KiB/s (1169kB/s), 385KiB/s-757KiB/s (394kB/s-775kB/s), io=11.2MiB (11.7MB), run=10022-10037msec 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.380 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.642 11:32:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.642 00:38:24.642 real 0m11.537s 00:38:24.642 user 0m37.192s 00:38:24.642 sys 0m1.304s 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:24.642 11:32:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.642 ************************************ 00:38:24.642 END TEST fio_dif_1_multi_subsystems 00:38:24.642 ************************************ 00:38:24.642 11:32:32 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:24.642 11:32:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:24.642 11:32:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:24.642 11:32:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:24.642 ************************************ 00:38:24.642 START TEST fio_dif_rand_params 00:38:24.642 ************************************ 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:24.642 11:32:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.642 bdev_null0 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.642 [2024-11-19 11:32:32.883810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:24.642 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:24.643 { 
00:38:24.643 "params": { 00:38:24.643 "name": "Nvme$subsystem", 00:38:24.643 "trtype": "$TEST_TRANSPORT", 00:38:24.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.643 "adrfam": "ipv4", 00:38:24.643 "trsvcid": "$NVMF_PORT", 00:38:24.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.643 "hdgst": ${hdgst:-false}, 00:38:24.643 "ddgst": ${ddgst:-false} 00:38:24.643 }, 00:38:24.643 "method": "bdev_nvme_attach_controller" 00:38:24.643 } 00:38:24.643 EOF 00:38:24.643 )") 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.643 
11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:24.643 "params": { 00:38:24.643 "name": "Nvme0", 00:38:24.643 "trtype": "tcp", 00:38:24.643 "traddr": "10.0.0.2", 00:38:24.643 "adrfam": "ipv4", 00:38:24.643 "trsvcid": "4420", 00:38:24.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.643 "hdgst": false, 00:38:24.643 "ddgst": false 00:38:24.643 }, 00:38:24.643 "method": "bdev_nvme_attach_controller" 00:38:24.643 }' 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:24.643 11:32:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:25.232 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:25.232 ... 00:38:25.232 fio-3.35 00:38:25.232 Starting 3 threads 00:38:31.813 00:38:31.813 filename0: (groupid=0, jobs=1): err= 0: pid=271647: Tue Nov 19 11:32:39 2024 00:38:31.813 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(159MiB/5047msec) 00:38:31.813 slat (nsec): min=5470, max=54496, avg=7735.58, stdev=2038.06 00:38:31.813 clat (usec): min=4513, max=52874, avg=11877.82, stdev=10209.24 00:38:31.813 lat (usec): min=4521, max=52885, avg=11885.56, stdev=10209.32 00:38:31.813 clat percentiles (usec): 00:38:31.813 | 1.00th=[ 5080], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7767], 00:38:31.813 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896], 00:38:31.813 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12387], 95.00th=[47973], 00:38:31.813 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:38:31.813 | 99.99th=[52691] 00:38:31.813 bw ( KiB/s): min=19968, max=38400, per=37.11%, avg=32460.80, stdev=5959.66, samples=10 00:38:31.813 iops : min= 156, max= 300, avg=253.60, stdev=46.56, samples=10 00:38:31.813 lat (msec) : 10=61.73%, 20=31.50%, 50=5.12%, 100=1.65% 00:38:31.813 cpu : usr=95.86%, sys=3.90%, ctx=8, majf=0, minf=120 00:38:31.813 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:31.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.813 issued rwts: total=1270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.813 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:31.813 filename0: (groupid=0, jobs=1): err= 0: pid=271648: Tue Nov 19 11:32:39 2024 00:38:31.813 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(143MiB/5047msec) 00:38:31.813 slat (nsec): min=5685, max=32436, avg=8284.75, 
stdev=1497.87 00:38:31.813 clat (usec): min=5574, max=90504, avg=13186.56, stdev=8346.19 00:38:31.813 lat (usec): min=5582, max=90515, avg=13194.84, stdev=8346.45 00:38:31.813 clat percentiles (usec): 00:38:31.813 | 1.00th=[ 5997], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 9110], 00:38:31.813 | 30.00th=[10159], 40.00th=[11076], 50.00th=[11863], 60.00th=[12911], 00:38:31.813 | 70.00th=[13960], 80.00th=[14746], 90.00th=[15664], 95.00th=[16319], 00:38:31.813 | 99.00th=[51119], 99.50th=[54789], 99.90th=[90702], 99.95th=[90702], 00:38:31.813 | 99.99th=[90702] 00:38:31.813 bw ( KiB/s): min=23040, max=42240, per=33.42%, avg=29235.20, stdev=5693.47, samples=10 00:38:31.813 iops : min= 180, max= 330, avg=228.40, stdev=44.48, samples=10 00:38:31.813 lat (msec) : 10=28.58%, 20=67.83%, 50=2.19%, 100=1.40% 00:38:31.813 cpu : usr=94.87%, sys=4.88%, ctx=8, majf=0, minf=117 00:38:31.813 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:31.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.813 issued rwts: total=1144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.813 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:31.813 filename0: (groupid=0, jobs=1): err= 0: pid=271649: Tue Nov 19 11:32:39 2024 00:38:31.813 read: IOPS=205, BW=25.6MiB/s (26.9MB/s)(129MiB/5046msec) 00:38:31.813 slat (nsec): min=5451, max=34315, avg=8713.77, stdev=1992.82 00:38:31.813 clat (usec): min=5677, max=91750, avg=14571.00, stdev=10620.65 00:38:31.813 lat (usec): min=5689, max=91759, avg=14579.72, stdev=10620.42 00:38:31.813 clat percentiles (usec): 00:38:31.813 | 1.00th=[ 6325], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9896], 00:38:31.813 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12125], 60.00th=[12911], 00:38:31.813 | 70.00th=[13698], 80.00th=[14484], 90.00th=[16057], 95.00th=[49021], 00:38:31.813 | 99.00th=[52691], 99.50th=[53740], 
99.90th=[89654], 99.95th=[91751], 00:38:31.813 | 99.99th=[91751] 00:38:31.813 bw ( KiB/s): min=13824, max=35328, per=30.23%, avg=26444.80, stdev=5822.77, samples=10 00:38:31.813 iops : min= 108, max= 276, avg=206.60, stdev=45.49, samples=10 00:38:31.813 lat (msec) : 10=20.48%, 20=72.66%, 50=3.77%, 100=3.09% 00:38:31.813 cpu : usr=94.81%, sys=4.92%, ctx=9, majf=0, minf=75 00:38:31.813 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:31.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:31.813 issued rwts: total=1035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:31.813 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:31.813 00:38:31.813 Run status group 0 (all jobs): 00:38:31.813 READ: bw=85.4MiB/s (89.6MB/s), 25.6MiB/s-31.5MiB/s (26.9MB/s-33.0MB/s), io=431MiB (452MB), run=5046-5047msec 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:31.813 11:32:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.813 bdev_null0 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.813 [2024-11-19 11:32:39.250030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:31.813 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.814 bdev_null1 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:31.814 bdev_null2 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:31.814 { 00:38:31.814 "params": { 00:38:31.814 "name": "Nvme$subsystem", 00:38:31.814 "trtype": "$TEST_TRANSPORT", 00:38:31.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:31.814 "adrfam": "ipv4", 00:38:31.814 "trsvcid": "$NVMF_PORT", 00:38:31.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:31.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:31.814 "hdgst": ${hdgst:-false}, 00:38:31.814 "ddgst": ${ddgst:-false} 00:38:31.814 }, 00:38:31.814 "method": "bdev_nvme_attach_controller" 00:38:31.814 } 00:38:31.814 EOF 00:38:31.814 )") 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:31.814 
11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:31.814 { 00:38:31.814 "params": { 00:38:31.814 "name": "Nvme$subsystem", 00:38:31.814 "trtype": "$TEST_TRANSPORT", 00:38:31.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:31.814 "adrfam": "ipv4", 00:38:31.814 "trsvcid": "$NVMF_PORT", 00:38:31.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:31.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:31.814 "hdgst": ${hdgst:-false}, 00:38:31.814 "ddgst": ${ddgst:-false} 00:38:31.814 }, 00:38:31.814 "method": "bdev_nvme_attach_controller" 00:38:31.814 } 00:38:31.814 EOF 00:38:31.814 )") 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:31.814 
11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:31.814 { 00:38:31.814 "params": { 00:38:31.814 "name": "Nvme$subsystem", 00:38:31.814 "trtype": "$TEST_TRANSPORT", 00:38:31.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:31.814 "adrfam": "ipv4", 00:38:31.814 "trsvcid": "$NVMF_PORT", 00:38:31.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:31.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:31.814 "hdgst": ${hdgst:-false}, 00:38:31.814 "ddgst": ${ddgst:-false} 00:38:31.814 }, 00:38:31.814 "method": "bdev_nvme_attach_controller" 00:38:31.814 } 00:38:31.814 EOF 00:38:31.814 )") 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:31.814 11:32:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:31.814 "params": {
00:38:31.814 "name": "Nvme0",
00:38:31.814 "trtype": "tcp",
00:38:31.814 "traddr": "10.0.0.2",
00:38:31.814 "adrfam": "ipv4",
00:38:31.814 "trsvcid": "4420",
00:38:31.814 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:31.814 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:31.814 "hdgst": false,
00:38:31.814 "ddgst": false
00:38:31.814 },
00:38:31.814 "method": "bdev_nvme_attach_controller"
00:38:31.814 },{
00:38:31.814 "params": {
00:38:31.814 "name": "Nvme1",
00:38:31.814 "trtype": "tcp",
00:38:31.814 "traddr": "10.0.0.2",
00:38:31.814 "adrfam": "ipv4",
00:38:31.814 "trsvcid": "4420",
00:38:31.814 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:31.814 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:31.814 "hdgst": false,
00:38:31.814 "ddgst": false
00:38:31.814 },
00:38:31.814 "method": "bdev_nvme_attach_controller"
00:38:31.814 },{
00:38:31.814 "params": {
00:38:31.814 "name": "Nvme2",
00:38:31.814 "trtype": "tcp",
00:38:31.814 "traddr": "10.0.0.2",
00:38:31.814 "adrfam": "ipv4",
00:38:31.814 "trsvcid": "4420",
00:38:31.814 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:38:31.814 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:38:31.814 "hdgst": false,
00:38:31.814 "ddgst": false
00:38:31.814 },
00:38:31.814 "method": "bdev_nvme_attach_controller"
00:38:31.814 }'
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:31.815 11:32:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:31.815 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:31.815 ...
00:38:31.815 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:31.815 ...
00:38:31.815 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:31.815 ...
00:38:31.815 fio-3.35
00:38:31.815 Starting 24 threads
00:38:44.045
00:38:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=273039: Tue Nov 19 11:32:50 2024
00:38:44.045 read: IOPS=543, BW=2172KiB/s (2224kB/s)(21.2MiB/10017msec)
00:38:44.045 slat (nsec): min=5588, max=92814, avg=9100.59, stdev=7498.25
00:38:44.045 clat (usec): min=10509, max=34470, avg=29385.11, stdev=5173.00
00:38:44.045 lat (usec): min=10600, max=34476, avg=29394.21, stdev=5173.21
00:38:44.045 clat percentiles (usec):
00:38:44.045 | 1.00th=[16319], 5.00th=[20317], 10.00th=[21365], 20.00th=[22676],
00:38:44.045 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375],
00:38:44.045 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424],
00:38:44.045 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341],
00:38:44.045 | 99.99th=[34341]
00:38:44.045 bw ( KiB/s): min= 1920, max= 2554, per=4.58%, avg=2169.00, stdev=220.50, samples=20
00:38:44.045 iops : min= 480, max= 638, avg=542.20, stdev=55.03, samples=20
00:38:44.045 lat (msec) : 20=4.12%, 50=95.88%
00:38:44.045 cpu : usr=99.00%, sys=0.73%, ctx=14, majf=0, minf=39
00:38:44.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.045 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.045 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=273040: Tue Nov 19 11:32:50 2024
00:38:44.045 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec)
00:38:44.045 slat (nsec): min=5647, max=91351, avg=24858.33, stdev=16079.32
00:38:44.045 clat (usec): min=16158, max=48848, avg=32569.35, stdev=1535.68
00:38:44.045 lat (usec): min=16165, max=48864, avg=32594.20, stdev=1534.07
00:38:44.045 clat percentiles (usec):
00:38:44.045 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:38:44.045 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.045 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.045 | 99.00th=[34866], 99.50th=[35914], 99.90th=[49021], 99.95th=[49021],
00:38:44.045 | 99.99th=[49021]
00:38:44.045 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1946.42, stdev=67.93, samples=19
00:38:44.045 iops : min= 448, max= 512, avg=486.53, stdev=17.02, samples=19
00:38:44.045 lat (msec) : 20=0.33%, 50=99.67%
00:38:44.045 cpu : usr=98.78%, sys=0.81%, ctx=91, majf=0, minf=19
00:38:44.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.045 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.045 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=273041: Tue Nov 19 11:32:50 2024
00:38:44.045 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10016msec)
00:38:44.045 slat (nsec): min=5721, max=77553, avg=18791.54, stdev=11211.58
00:38:44.045 clat (usec): min=10345, max=52410, avg=32468.64, stdev=2032.54
00:38:44.045 lat (usec): min=10354, max=52419, avg=32487.44, stdev=2032.12
00:38:44.045 clat percentiles (usec):
00:38:44.045 | 1.00th=[18220], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113],
00:38:44.045 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.045 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.045 | 99.00th=[34341], 99.50th=[36439], 99.90th=[38536], 99.95th=[49546],
00:38:44.045 | 99.99th=[52167]
00:38:44.045 bw ( KiB/s): min= 1788, max= 2164, per=4.14%, avg=1958.20, stdev=80.39, samples=20
00:38:44.045 iops : min= 447, max= 541, avg=489.55, stdev=20.10, samples=20
00:38:44.046 lat (msec) : 20=1.06%, 50=98.90%, 100=0.04%
00:38:44.046 cpu : usr=98.57%, sys=1.01%, ctx=59, majf=0, minf=15
00:38:44.046 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.1%, 16=8.2%, 32=0.0%, >=64=0.0%
00:38:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.046 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.046 filename0: (groupid=0, jobs=1): err= 0: pid=273042: Tue Nov 19 11:32:50 2024
00:38:44.046 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10004msec)
00:38:44.046 slat (nsec): min=5651, max=55502, avg=12444.46, stdev=7994.04
00:38:44.046 clat (usec): min=19903, max=45094, avg=32690.28, stdev=1642.83
00:38:44.046 lat (usec): min=19911, max=45100, avg=32702.73, stdev=1643.55
00:38:44.046 clat percentiles (usec):
00:38:44.046 | 1.00th=[27395], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375],
00:38:44.046 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.046 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341],
00:38:44.046 | 99.00th=[36963], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303],
00:38:44.046 | 99.99th=[45351]
00:38:44.046 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1953.00, stdev=57.74, samples=19
00:38:44.046 iops : min= 479, max= 512, avg=488.21, stdev=14.37, samples=19
00:38:44.046 lat (msec) : 20=0.04%, 50=99.96%
00:38:44.046 cpu : usr=98.76%, sys=0.88%, ctx=47, majf=0, minf=16
00:38:44.046 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0%
00:38:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.046 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.046 filename0: (groupid=0, jobs=1): err= 0: pid=273044: Tue Nov 19 11:32:50 2024
00:38:44.046 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10005msec)
00:38:44.046 slat (nsec): min=5606, max=87462, avg=19832.21, stdev=14909.85
00:38:44.046 clat (usec): min=17496, max=53120, avg=32632.82, stdev=1678.68
00:38:44.046 lat (usec): min=17503, max=53138, avg=32652.65, stdev=1677.44
00:38:44.046 clat percentiles (usec):
00:38:44.046 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113],
00:38:44.046 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.046 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.046 | 99.00th=[34866], 99.50th=[36963], 99.90th=[53216], 99.95th=[53216],
00:38:44.046 | 99.99th=[53216]
00:38:44.046 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1946.53, stdev=80.88, samples=19
00:38:44.046 iops : min= 448, max= 512, avg=486.63, stdev=20.22, samples=19
00:38:44.046 lat (msec) : 20=0.14%, 50=99.53%, 100=0.33%
00:38:44.046 cpu : usr=98.34%, sys=1.23%, ctx=56, majf=0, minf=14
00:38:44.046 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.046 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.046 filename0: (groupid=0, jobs=1): err= 0: pid=273045: Tue Nov 19 11:32:50 2024
00:38:44.046 read: IOPS=489, BW=1960KiB/s (2007kB/s)(19.2MiB/10026msec)
00:38:44.046 slat (nsec): min=5619, max=79109, avg=12878.32, stdev=10508.08
00:38:44.046 clat (usec): min=13413, max=37334, avg=32553.06, stdev=1682.03
00:38:44.046 lat (usec): min=13445, max=37340, avg=32565.93, stdev=1681.25
00:38:44.046 clat percentiles (usec):
00:38:44.046 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375],
00:38:44.046 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.046 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.046 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36963], 99.95th=[37487],
00:38:44.046 | 99.99th=[37487]
00:38:44.046 bw ( KiB/s): min= 1916, max= 2052, per=4.14%, avg=1958.20, stdev=60.78, samples=20
00:38:44.046 iops : min= 479, max= 513, avg=489.55, stdev=15.20, samples=20
00:38:44.046 lat (msec) : 20=0.65%, 50=99.35%
00:38:44.046 cpu : usr=99.01%, sys=0.71%, ctx=11, majf=0, minf=22
00:38:44.046 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:38:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.046 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.046 filename0: (groupid=0, jobs=1): err= 0: pid=273046: Tue Nov 19 11:32:50 2024
00:38:44.046 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.6MiB/10027msec)
00:38:44.046 slat (usec): min=5, max=109, avg=15.07, stdev=12.49
00:38:44.046 clat (usec): min=10610, max=36817, avg=31901.56, stdev=3241.05
00:38:44.046 lat (usec): min=10621, max=36824, avg=31916.63, stdev=3240.09
00:38:44.046 clat percentiles (usec):
00:38:44.046 | 1.00th=[13566], 5.00th=[23462], 10.00th=[31851], 20.00th=[32113],
00:38:44.046 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.046 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.046 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963],
00:38:44.046 | 99.99th=[36963]
00:38:44.046 bw ( KiB/s): min= 1916, max= 2304, per=4.22%, avg=1996.35, stdev=104.77, samples=20
00:38:44.046 iops : min= 479, max= 576, avg=499.05, stdev=26.13, samples=20
00:38:44.046 lat (msec) : 20=1.76%, 50=98.24%
00:38:44.046 cpu : usr=98.85%, sys=0.83%, ctx=116, majf=0, minf=24
00:38:44.046 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.046 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.046 filename0: (groupid=0, jobs=1): err= 0: pid=273047: Tue Nov 19 11:32:50 2024
00:38:44.046 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec)
00:38:44.046 slat (nsec): min=5641, max=94048, avg=25139.40, stdev=14427.08
00:38:44.046 clat (usec): min=15892, max=48764, avg=32568.06, stdev=1532.83
00:38:44.046 lat (usec): min=15912, max=48780, avg=32593.20, stdev=1531.53
00:38:44.046 clat percentiles (usec):
00:38:44.046 | 1.00th=[31589], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:38:44.046 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.046 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.046 | 99.00th=[34866], 99.50th=[35914], 99.90th=[48497], 99.95th=[48497],
00:38:44.046 | 99.99th=[49021]
00:38:44.046 bw ( KiB/s): min= 1788, max= 2048, per=4.11%, avg=1946.42, stdev=68.66, samples=19
00:38:44.046 iops : min= 447, max= 512, avg=486.53, stdev=17.12, samples=19
00:38:44.046 lat (msec) : 20=0.33%, 50=99.67%
00:38:44.046 cpu : usr=98.71%, sys=0.85%, ctx=50, majf=0, minf=17
00:38:44.046 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.046 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=273048: Tue Nov 19 11:32:50 2024
00:38:44.046 read: IOPS=491, BW=1965KiB/s (2013kB/s)(19.2MiB/10001msec)
00:38:44.046 slat (nsec): min=5937, max=69486, avg=15546.99, stdev=9856.13
00:38:44.046 clat (usec): min=12266, max=71689, avg=32430.72, stdev=3582.72
00:38:44.046 lat (usec): min=12300, max=71713, avg=32446.27, stdev=3582.88
00:38:44.046 clat percentiles (usec):
00:38:44.046 | 1.00th=[19006], 5.00th=[29492], 10.00th=[32113], 20.00th=[32113],
00:38:44.046 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.046 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341],
00:38:44.046 | 99.00th=[42206], 99.50th=[46400], 99.90th=[71828], 99.95th=[71828],
00:38:44.046 | 99.99th=[71828]
00:38:44.046 bw ( KiB/s): min= 1792, max= 2123, per=4.14%, avg=1960.95, stdev=74.83, samples=19
00:38:44.046 iops : min= 448, max= 530, avg=490.16, stdev=18.61, samples=19
00:38:44.046 lat (msec) : 20=1.14%, 50=98.37%, 100=0.49%
00:38:44.046 cpu : usr=98.81%, sys=0.90%, ctx=38, majf=0, minf=14
00:38:44.046 IO depths : 1=4.6%, 2=9.6%, 4=20.2%, 8=56.7%, 16=8.9%, 32=0.0%, >=64=0.0%
00:38:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 complete : 0=0.0%, 4=93.1%, 8=2.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.046 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=273049: Tue Nov 19 11:32:50 2024
00:38:44.046 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10017msec)
00:38:44.046 slat (nsec): min=5653, max=67674, avg=16235.31, stdev=10345.63
00:38:44.046 clat (usec): min=10664, max=36450, avg=32384.10, stdev=2401.13
00:38:44.046 lat (usec): min=10674, max=36458, avg=32400.33, stdev=2400.73
00:38:44.046 clat percentiles (usec):
00:38:44.046 | 1.00th=[16450], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113],
00:38:44.046 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.046 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.046 | 99.00th=[34341], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439],
00:38:44.046 | 99.99th=[36439]
00:38:44.046 bw ( KiB/s): min= 1788, max= 2304, per=4.15%, avg=1964.40, stdev=104.48, samples=20
00:38:44.046 iops : min= 447, max= 576, avg=491.10, stdev=26.12, samples=20
00:38:44.046 lat (msec) : 20=1.62%, 50=98.38%
00:38:44.046 cpu : usr=99.03%, sys=0.69%, ctx=29, majf=0, minf=23
00:38:44.046 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0%
00:38:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.046 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.046 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=273050: Tue Nov 19 11:32:50 2024
00:38:44.047 read: IOPS=501, BW=2004KiB/s (2053kB/s)(19.6MiB/10002msec)
00:38:44.047 slat (nsec): min=5583, max=69668, avg=12208.95, stdev=7695.63
00:38:44.047 clat (usec): min=13244, max=48547, avg=31830.99, stdev=4130.40
00:38:44.047 lat (usec): min=13250, max=48557, avg=31843.20, stdev=4131.34
00:38:44.047 clat percentiles (usec):
00:38:44.047 | 1.00th=[14615], 5.00th=[22676], 10.00th=[26608], 20.00th=[32113],
00:38:44.047 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.047 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866],
00:38:44.047 | 99.00th=[43254], 99.50th=[44827], 99.90th=[48497], 99.95th=[48497],
00:38:44.047 | 99.99th=[48497]
00:38:44.047 bw ( KiB/s): min= 1916, max= 2496, per=4.25%, avg=2009.26, stdev=141.09, samples=19
00:38:44.047 iops : min= 479, max= 624, avg=502.32, stdev=35.27, samples=19
00:38:44.047 lat (msec) : 20=1.72%, 50=98.28%
00:38:44.047 cpu : usr=97.97%, sys=1.26%, ctx=425, majf=0, minf=13
00:38:44.047 IO depths : 1=3.7%, 2=8.5%, 4=20.2%, 8=58.5%, 16=9.2%, 32=0.0%, >=64=0.0%
00:38:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 complete : 0=0.0%, 4=92.9%, 8=1.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 issued rwts: total=5012,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.047 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.047 filename1: (groupid=0, jobs=1): err= 0: pid=273051: Tue Nov 19 11:32:50 2024
00:38:44.047 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10004msec)
00:38:44.047 slat (nsec): min=5596, max=92273, avg=23353.78, stdev=14814.01
00:38:44.047 clat (usec): min=4819, max=56673, avg=32586.52, stdev=2042.70
00:38:44.047 lat (usec): min=4825, max=56689, avg=32609.87, stdev=2042.00
00:38:44.047 clat percentiles (usec):
00:38:44.047 | 1.00th=[31589], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:38:44.047 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.047 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.047 | 99.00th=[35914], 99.50th=[36439], 99.90th=[49546], 99.95th=[49546],
00:38:44.047 | 99.99th=[56886]
00:38:44.047 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1946.42, stdev=67.93, samples=19
00:38:44.047 iops : min= 448, max= 512, avg=486.53, stdev=17.02, samples=19
00:38:44.047 lat (msec) : 10=0.33%, 50=99.63%, 100=0.04%
00:38:44.047 cpu : usr=99.07%, sys=0.66%, ctx=14, majf=0, minf=21
00:38:44.047 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.047 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.047 filename1: (groupid=0, jobs=1): err= 0: pid=273053: Tue Nov 19 11:32:50 2024
00:38:44.047 read: IOPS=491, BW=1967KiB/s (2015kB/s)(19.3MiB/10025msec)
00:38:44.047 slat (nsec): min=5609, max=98684, avg=12420.17, stdev=10059.85
00:38:44.047 clat (usec): min=13384, max=45555, avg=32430.52, stdev=2558.84
00:38:44.047 lat (usec): min=13398, max=45562, avg=32442.94, stdev=2558.52
00:38:44.047 clat percentiles (usec):
00:38:44.047 | 1.00th=[19006], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375],
00:38:44.047 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.047 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.047 | 99.00th=[39060], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827],
00:38:44.047 | 99.99th=[45351]
00:38:44.047 bw ( KiB/s): min= 1916, max= 2072, per=4.15%, avg=1965.80, stdev=65.10, samples=20
00:38:44.047 iops : min= 479, max= 518, avg=491.45, stdev=16.28, samples=20
00:38:44.047 lat (msec) : 20=1.12%, 50=98.88%
00:38:44.047 cpu : usr=98.90%, sys=0.75%, ctx=36, majf=0, minf=17
00:38:44.047 IO depths : 1=5.7%, 2=11.7%, 4=24.2%, 8=51.5%, 16=6.9%, 32=0.0%, >=64=0.0%
00:38:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 issued rwts: total=4931,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.047 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.047 filename1: (groupid=0, jobs=1): err= 0: pid=273054: Tue Nov 19 11:32:50 2024
00:38:44.047 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10006msec)
00:38:44.047 slat (nsec): min=5610, max=90137, avg=22888.24, stdev=14699.41
00:38:44.047 clat (usec): min=15903, max=53254, avg=32581.72, stdev=1698.57
00:38:44.047 lat (usec): min=15909, max=53271, avg=32604.61, stdev=1697.53
00:38:44.047 clat percentiles (usec):
00:38:44.047 | 1.00th=[31589], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:38:44.047 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.047 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.047 | 99.00th=[34866], 99.50th=[35914], 99.90th=[53216], 99.95th=[53216],
00:38:44.047 | 99.99th=[53216]
00:38:44.047 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1946.53, stdev=68.70, samples=19
00:38:44.047 iops : min= 448, max= 512, avg=486.63, stdev=17.18, samples=19
00:38:44.047 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33%
00:38:44.047 cpu : usr=98.93%, sys=0.79%, ctx=14, majf=0, minf=19
00:38:44.047 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.047 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.047 filename1: (groupid=0, jobs=1): err= 0: pid=273055: Tue Nov 19 11:32:50 2024
00:38:44.047 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec)
00:38:44.047 slat (nsec): min=5588, max=93241, avg=22655.49, stdev=17575.68
00:38:44.047 clat (usec): min=16154, max=57825, avg=32601.49, stdev=1713.66
00:38:44.047 lat (usec): min=16160, max=57844, avg=32624.15, stdev=1711.30
00:38:44.047 clat percentiles (usec):
00:38:44.047 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:38:44.047 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.047 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.047 | 99.00th=[35390], 99.50th=[35914], 99.90th=[50594], 99.95th=[50594],
00:38:44.047 | 99.99th=[57934]
00:38:44.047 bw ( KiB/s): min= 1788, max= 2048, per=4.11%, avg=1946.05, stdev=68.81, samples=19
00:38:44.047 iops : min= 447, max= 512, avg=486.47, stdev=17.14, samples=19
00:38:44.047 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33%
00:38:44.047 cpu : usr=98.88%, sys=0.76%, ctx=79, majf=0, minf=17
00:38:44.047 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:38:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.047 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.047 filename1: (groupid=0, jobs=1): err= 0: pid=273056: Tue Nov 19 11:32:50 2024
00:38:44.047 read: IOPS=518, BW=2074KiB/s (2124kB/s)(20.3MiB/10027msec)
00:38:44.047 slat (nsec): min=5609, max=98655, avg=11211.81, stdev=9853.80
00:38:44.047 clat (usec): min=10601, max=36828, avg=30757.96, stdev=4366.48
00:38:44.047 lat (usec): min=10628, max=36834, avg=30769.17, stdev=4366.29
00:38:44.047 clat percentiles (usec):
00:38:44.047 | 1.00th=[15926], 5.00th=[21365], 10.00th=[22676], 20.00th=[31851],
00:38:44.047 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.047 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817],
00:38:44.047 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963],
00:38:44.047 | 99.99th=[36963]
00:38:44.047 bw ( KiB/s): min= 1920, max= 2304, per=4.38%, avg=2073.10, stdev=140.83, samples=20
00:38:44.047 iops : min= 480, max= 576, avg=518.20, stdev=35.12, samples=20
00:38:44.047 lat (msec) : 20=2.77%, 50=97.23%
00:38:44.047 cpu : usr=98.78%, sys=0.93%, ctx=17, majf=0, minf=20
00:38:44.047 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.047 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=273057: Tue Nov 19 11:32:50 2024
00:38:44.047 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10008msec)
00:38:44.047 slat (nsec): min=5582, max=69430, avg=16776.48, stdev=9110.85
00:38:44.047 clat (usec): min=15988, max=43993, avg=32551.78, stdev=1479.38
00:38:44.047 lat (usec): min=16017, max=44002, avg=32568.56, stdev=1479.02
00:38:44.047 clat percentiles (usec):
00:38:44.047 | 1.00th=[26870], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113],
00:38:44.047 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.047 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.047 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439],
00:38:44.047 | 99.99th=[43779]
00:38:44.047 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1959.89, stdev=60.35, samples=19
00:38:44.047 iops : min= 480, max= 512, avg=489.89, stdev=14.97, samples=19
00:38:44.047 lat (msec) : 20=0.65%, 50=99.35%
00:38:44.047 cpu : usr=99.02%, sys=0.72%, ctx=15, majf=0, minf=21
00:38:44.047 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:38:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.047 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.047 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=273058: Tue Nov 19 11:32:50 2024
00:38:44.047 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10016msec)
00:38:44.047 slat (nsec): min=5650, max=56806, avg=15135.46, stdev=9244.17
00:38:44.047 clat (usec): min=13436, max=36448, avg=32489.55, stdev=1809.83
00:38:44.047 lat (usec): min=13445, max=36455, avg=32504.68, stdev=1810.33
00:38:44.047 clat percentiles (usec):
00:38:44.047 | 1.00th=[22414], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113],
00:38:44.047 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.047 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.047 | 99.00th=[34341], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439],
00:38:44.047 | 99.99th=[36439]
00:38:44.047 bw ( KiB/s): min= 1788, max= 2052, per=4.14%, avg=1958.20, stdev=73.98, samples=20
00:38:44.048 iops : min= 447, max= 513, avg=489.55, stdev=18.49, samples=20
00:38:44.048 lat (msec) : 20=0.98%, 50=99.02%
00:38:44.048 cpu : usr=98.87%, sys=0.81%, ctx=87, majf=0, minf=18
00:38:44.048 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:44.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.048 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.048 filename2: (groupid=0, jobs=1): err= 0: pid=273059: Tue Nov 19 11:32:50 2024
00:38:44.048 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10003msec)
00:38:44.048 slat (nsec): min=5576, max=57216, avg=12334.28, stdev=9495.80
00:38:44.048 clat (usec): min=4936, max=81635, avg=32659.69, stdev=4847.27
00:38:44.048 lat (usec): min=4942, max=81656, avg=32672.03, stdev=4847.43
00:38:44.048 clat percentiles (usec):
00:38:44.048 | 1.00th=[20841], 5.00th=[25035], 10.00th=[26870], 20.00th=[31851],
00:38:44.048 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.048 | 70.00th=[33424], 80.00th=[33817], 90.00th=[36963], 95.00th=[40109],
00:38:44.048 | 99.00th=[47449], 99.50th=[49546], 99.90th=[81265], 99.95th=[81265],
00:38:44.048 | 99.99th=[81265]
00:38:44.048 bw ( KiB/s): min= 1728, max= 2048, per=4.12%, avg=1950.05, stdev=74.70, samples=19
00:38:44.048 iops : min= 432, max= 512, avg=487.47, stdev=18.67, samples=19
00:38:44.048 lat (msec) : 10=0.20%, 20=0.65%, 50=98.65%, 100=0.49%
00:38:44.048 cpu : usr=98.86%, sys=0.86%, ctx=14, majf=0, minf=18
00:38:44.048 IO depths : 1=1.2%, 2=2.4%, 4=6.5%, 8=75.4%, 16=14.4%, 32=0.0%, >=64=0.0%
00:38:44.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 complete : 0=0.0%, 4=89.8%, 8=7.6%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 issued rwts: total=4890,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.048 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.048 filename2: (groupid=0, jobs=1): err= 0: pid=273060: Tue Nov 19 11:32:50 2024
00:38:44.048 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec)
00:38:44.048 slat (nsec): min=5394, max=82062, avg=19581.75, stdev=14508.90
00:38:44.048 clat (usec): min=16188, max=49815, avg=32637.27, stdev=1709.99
00:38:44.048 lat (usec): min=16194, max=49830, avg=32656.85, stdev=1708.48
00:38:44.048 clat percentiles (usec):
00:38:44.048 | 1.00th=[26870], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113],
00:38:44.048 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.048 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.048 | 99.00th=[35914], 99.50th=[39060], 99.90th=[49546], 99.95th=[49546],
00:38:44.048 | 99.99th=[50070]
00:38:44.048 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1946.26, stdev=68.30, samples=19
00:38:44.048 iops : min= 448, max= 512, avg=486.53, stdev=17.02, samples=19
00:38:44.048 lat (msec) : 20=0.33%, 50=99.67%
00:38:44.048 cpu : usr=98.27%, sys=1.14%, ctx=162, majf=0, minf=23
00:38:44.048 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0%
00:38:44.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.048 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.048 filename2: (groupid=0, jobs=1): err= 0: pid=273061: Tue Nov 19 11:32:50 2024
00:38:44.048 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec)
00:38:44.048 slat (nsec): min=5671, max=88599, avg=24020.23, stdev=14650.41
00:38:44.048 clat (usec): min=15899, max=50423, avg=32568.16, stdev=1590.79
00:38:44.048 lat (usec): min=15922, max=50439, avg=32592.18, stdev=1589.66
00:38:44.048 clat percentiles (usec):
00:38:44.048 | 1.00th=[31589], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:38:44.048 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:38:44.048 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.048 | 99.00th=[34866], 99.50th=[35914], 99.90th=[50594], 99.95th=[50594],
00:38:44.048 | 99.99th=[50594]
00:38:44.048 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1946.26, stdev=68.30, samples=19
00:38:44.048 iops : min= 448, max= 512, avg=486.53, stdev=17.02, samples=19
00:38:44.048 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33%
00:38:44.048 cpu : usr=98.62%, sys=0.93%, ctx=58, majf=0, minf=18
00:38:44.048 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:44.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.048 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.048 filename2: (groupid=0, jobs=1): err= 0: pid=273063: Tue Nov 19 11:32:50 2024
00:38:44.048 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.0MiB/10003msec)
00:38:44.048 slat (nsec): min=5593, max=82306, avg=18692.88, stdev=12734.62
00:38:44.048 clat (usec): min=14508, max=71851, avg=32715.41, stdev=3849.65
00:38:44.048 lat (usec): min=14531, max=71870, avg=32734.11, stdev=3849.38
00:38:44.048 clat percentiles (usec):
00:38:44.048 | 1.00th=[23462], 5.00th=[27132], 10.00th=[31589], 20.00th=[32113],
00:38:44.048 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.048 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[38536],
00:38:44.048 | 99.00th=[43254], 99.50th=[50070], 99.90th=[71828], 99.95th=[71828],
00:38:44.048 | 99.99th=[71828]
00:38:44.048 bw ( KiB/s): min= 1760, max= 2048, per=4.11%, avg=1943.26, stdev=67.41, samples=19
00:38:44.048 iops : min= 440, max= 512, avg=485.74, stdev=16.81, samples=19
00:38:44.048 lat (msec) : 20=0.45%, 50=99.06%, 100=0.49%
00:38:44.048 cpu : usr=98.64%, sys=0.95%, ctx=45, majf=0, minf=19
00:38:44.048 IO depths : 1=3.5%, 2=7.1%, 4=15.3%, 8=63.5%, 16=10.6%, 32=0.0%, >=64=0.0%
00:38:44.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 complete : 0=0.0%, 4=91.8%, 8=4.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.048 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:44.048 filename2: (groupid=0, jobs=1): err= 0: pid=273064: Tue Nov 19 11:32:50 2024
00:38:44.048 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10016msec)
00:38:44.048 slat (nsec): min=5602, max=87902, avg=15862.54, stdev=12849.92
00:38:44.048 clat (usec): min=17525, max=40400, avg=32593.09, stdev=1342.46
00:38:44.048 lat (usec): min=17531, max=40407, avg=32608.95, stdev=1341.84
00:38:44.048 clat percentiles (usec):
00:38:44.048 | 1.00th=[27657], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375],
00:38:44.048 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:38:44.048 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:38:44.048 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38011], 99.95th=[39584],
00:38:44.048 | 99.99th=[40633]
00:38:44.048 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1953.05, stdev=58.31, samples=19
00:38:44.048 iops : min= 479, max= 512, avg=488.26, stdev=14.58, samples=19
00:38:44.048 lat (msec) : 20=0.29%, 50=99.71%
00:38:44.048 cpu : usr=98.86%, sys=0.84%, ctx=36, majf=0, minf=16
00:38:44.048 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:38:44.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.048 complete :
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.048 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.048 filename2: (groupid=0, jobs=1): err= 0: pid=273065: Tue Nov 19 11:32:50 2024 00:38:44.048 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10006msec) 00:38:44.048 slat (nsec): min=5638, max=96215, avg=17803.38, stdev=12760.47 00:38:44.048 clat (usec): min=10510, max=35757, avg=32113.63, stdev=2857.64 00:38:44.048 lat (usec): min=10525, max=35775, avg=32131.43, stdev=2856.45 00:38:44.048 clat percentiles (usec): 00:38:44.048 | 1.00th=[15926], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:44.048 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:38:44.048 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:38:44.048 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:38:44.048 | 99.99th=[35914] 00:38:44.048 bw ( KiB/s): min= 1916, max= 2304, per=4.18%, avg=1980.42, stdev=99.00, samples=19 00:38:44.048 iops : min= 479, max= 576, avg=495.11, stdev=24.75, samples=19 00:38:44.048 lat (msec) : 20=1.61%, 50=98.39% 00:38:44.048 cpu : usr=98.53%, sys=0.96%, ctx=78, majf=0, minf=21 00:38:44.048 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:44.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.048 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:44.048 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:44.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:44.048 00:38:44.048 Run status group 0 (all jobs): 00:38:44.048 READ: bw=46.2MiB/s (48.4MB/s), 1947KiB/s-2172KiB/s (1994kB/s-2224kB/s), io=463MiB (486MB), run=10001-10027msec 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:44.048 11:32:50 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:44.048 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:44.049 
11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 bdev_null0 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 [2024-11-19 11:32:51.044078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 bdev_null1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:44.049 { 00:38:44.049 "params": { 00:38:44.049 "name": "Nvme$subsystem", 00:38:44.049 "trtype": 
"$TEST_TRANSPORT", 00:38:44.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:44.049 "adrfam": "ipv4", 00:38:44.049 "trsvcid": "$NVMF_PORT", 00:38:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:44.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:44.049 "hdgst": ${hdgst:-false}, 00:38:44.049 "ddgst": ${ddgst:-false} 00:38:44.049 }, 00:38:44.049 "method": "bdev_nvme_attach_controller" 00:38:44.049 } 00:38:44.049 EOF 00:38:44.049 )") 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:44.049 { 00:38:44.049 "params": { 00:38:44.049 "name": "Nvme$subsystem", 00:38:44.049 "trtype": "$TEST_TRANSPORT", 00:38:44.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:44.049 "adrfam": "ipv4", 00:38:44.049 "trsvcid": "$NVMF_PORT", 00:38:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:44.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:44.049 "hdgst": ${hdgst:-false}, 00:38:44.049 "ddgst": ${ddgst:-false} 00:38:44.049 }, 00:38:44.049 "method": "bdev_nvme_attach_controller" 00:38:44.049 } 00:38:44.049 EOF 00:38:44.049 )") 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:44.049 11:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:44.049 "params": { 00:38:44.049 "name": "Nvme0", 00:38:44.049 "trtype": "tcp", 00:38:44.049 "traddr": "10.0.0.2", 00:38:44.049 "adrfam": "ipv4", 00:38:44.049 "trsvcid": "4420", 00:38:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:44.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:44.049 "hdgst": false, 00:38:44.049 "ddgst": false 00:38:44.050 }, 00:38:44.050 "method": "bdev_nvme_attach_controller" 00:38:44.050 },{ 00:38:44.050 "params": { 00:38:44.050 "name": "Nvme1", 00:38:44.050 "trtype": "tcp", 00:38:44.050 "traddr": "10.0.0.2", 00:38:44.050 "adrfam": "ipv4", 00:38:44.050 "trsvcid": "4420", 00:38:44.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:44.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:44.050 "hdgst": false, 00:38:44.050 "ddgst": false 00:38:44.050 }, 00:38:44.050 "method": "bdev_nvme_attach_controller" 00:38:44.050 }' 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:44.050 11:32:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:44.050 11:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:44.050 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:44.050 ... 00:38:44.050 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:44.050 ... 00:38:44.050 fio-3.35 00:38:44.050 Starting 4 threads 00:38:49.337 00:38:49.337 filename0: (groupid=0, jobs=1): err= 0: pid=275358: Tue Nov 19 11:32:57 2024 00:38:49.337 read: IOPS=2100, BW=16.4MiB/s (17.2MB/s)(82.1MiB/5001msec) 00:38:49.337 slat (nsec): min=5406, max=40854, avg=7662.64, stdev=3685.22 00:38:49.337 clat (usec): min=1430, max=6385, avg=3788.93, stdev=611.96 00:38:49.337 lat (usec): min=1436, max=6391, avg=3796.59, stdev=611.73 00:38:49.337 clat percentiles (usec): 00:38:49.337 | 1.00th=[ 2868], 5.00th=[ 3195], 10.00th=[ 3228], 20.00th=[ 3425], 00:38:49.337 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3687], 60.00th=[ 3752], 00:38:49.337 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4883], 95.00th=[ 5211], 00:38:49.337 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 6194], 99.95th=[ 6325], 00:38:49.337 | 99.99th=[ 6390] 00:38:49.337 bw ( KiB/s): min=16176, max=17360, per=25.22%, avg=16821.33, stdev=359.56, samples=9 00:38:49.337 iops : min= 2022, max= 2170, avg=2102.67, stdev=44.94, samples=9 00:38:49.337 lat (msec) : 2=0.09%, 4=82.18%, 10=17.73% 00:38:49.337 cpu : usr=96.44%, sys=3.30%, ctx=7, majf=0, minf=9 00:38:49.337 IO depths : 1=0.1%, 2=0.1%, 4=70.0%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.337 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:38:49.337 issued rwts: total=10506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:49.337 filename0: (groupid=0, jobs=1): err= 0: pid=275359: Tue Nov 19 11:32:57 2024 00:38:49.337 read: IOPS=2114, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5005msec) 00:38:49.337 slat (nsec): min=5411, max=60277, avg=7635.18, stdev=3840.87 00:38:49.337 clat (usec): min=1834, max=6189, avg=3762.79, stdev=646.09 00:38:49.337 lat (usec): min=1853, max=6195, avg=3770.42, stdev=646.06 00:38:49.337 clat percentiles (usec): 00:38:49.337 | 1.00th=[ 2606], 5.00th=[ 2999], 10.00th=[ 3228], 20.00th=[ 3392], 00:38:49.337 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3687], 00:38:49.337 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 5080], 95.00th=[ 5276], 00:38:49.337 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6063], 99.95th=[ 6128], 00:38:49.337 | 99.99th=[ 6194] 00:38:49.337 bw ( KiB/s): min=16400, max=17392, per=25.37%, avg=16924.80, stdev=316.95, samples=10 00:38:49.337 iops : min= 2050, max= 2174, avg=2115.60, stdev=39.62, samples=10 00:38:49.337 lat (msec) : 2=0.06%, 4=82.41%, 10=17.54% 00:38:49.337 cpu : usr=96.84%, sys=2.88%, ctx=7, majf=0, minf=9 00:38:49.337 IO depths : 1=0.1%, 2=0.1%, 4=71.2%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.337 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.337 issued rwts: total=10583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:49.337 filename1: (groupid=0, jobs=1): err= 0: pid=275360: Tue Nov 19 11:32:57 2024 00:38:49.337 read: IOPS=2094, BW=16.4MiB/s (17.2MB/s)(81.8MiB/5002msec) 00:38:49.337 slat (nsec): min=5402, max=62454, avg=7729.97, stdev=3397.66 00:38:49.337 clat (usec): min=1881, max=6659, avg=3800.24, stdev=647.66 00:38:49.337 lat (usec): min=1886, max=6669, 
avg=3807.97, stdev=647.62 00:38:49.337 clat percentiles (usec): 00:38:49.337 | 1.00th=[ 2737], 5.00th=[ 3130], 10.00th=[ 3294], 20.00th=[ 3425], 00:38:49.337 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3720], 00:38:49.337 | 70.00th=[ 3785], 80.00th=[ 3949], 90.00th=[ 5145], 95.00th=[ 5276], 00:38:49.337 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 6128], 99.95th=[ 6390], 00:38:49.337 | 99.99th=[ 6652] 00:38:49.337 bw ( KiB/s): min=16304, max=17360, per=25.15%, avg=16778.67, stdev=336.19, samples=9 00:38:49.337 iops : min= 2038, max= 2170, avg=2097.33, stdev=42.02, samples=9 00:38:49.337 lat (msec) : 2=0.04%, 4=81.16%, 10=18.80% 00:38:49.337 cpu : usr=96.62%, sys=3.10%, ctx=7, majf=0, minf=9 00:38:49.337 IO depths : 1=0.1%, 2=0.1%, 4=70.6%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.337 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.337 issued rwts: total=10476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:49.338 filename1: (groupid=0, jobs=1): err= 0: pid=275361: Tue Nov 19 11:32:57 2024 00:38:49.338 read: IOPS=2076, BW=16.2MiB/s (17.0MB/s)(81.8MiB/5041msec) 00:38:49.338 slat (nsec): min=5405, max=36886, avg=7206.07, stdev=2539.66 00:38:49.338 clat (usec): min=1833, max=41779, avg=3825.75, stdev=1118.21 00:38:49.338 lat (usec): min=1838, max=41785, avg=3832.95, stdev=1118.25 00:38:49.338 clat percentiles (usec): 00:38:49.338 | 1.00th=[ 2671], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 3425], 00:38:49.338 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3720], 00:38:49.338 | 70.00th=[ 3785], 80.00th=[ 3949], 90.00th=[ 5211], 95.00th=[ 5276], 00:38:49.338 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 8586], 99.95th=[41157], 00:38:49.338 | 99.99th=[41681] 00:38:49.338 bw ( KiB/s): min=16176, max=17856, per=25.10%, avg=16740.80, stdev=602.55, 
samples=10 00:38:49.338 iops : min= 2022, max= 2232, avg=2092.60, stdev=75.32, samples=10 00:38:49.338 lat (msec) : 2=0.09%, 4=80.51%, 10=19.35%, 50=0.06% 00:38:49.338 cpu : usr=96.39%, sys=3.35%, ctx=7, majf=0, minf=9 00:38:49.338 IO depths : 1=0.1%, 2=0.4%, 4=72.4%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:49.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.338 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:49.338 issued rwts: total=10467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:49.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:49.338 00:38:49.338 Run status group 0 (all jobs): 00:38:49.338 READ: bw=65.1MiB/s (68.3MB/s), 16.2MiB/s-16.5MiB/s (17.0MB/s-17.3MB/s), io=328MiB (344MB), run=5001-5041msec 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.338 00:38:49.338 real 0m24.690s 00:38:49.338 user 5m10.827s 00:38:49.338 sys 0m4.589s 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:49.338 ************************************ 00:38:49.338 END TEST fio_dif_rand_params 00:38:49.338 ************************************ 00:38:49.338 11:32:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:49.338 11:32:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:49.338 11:32:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:38:49.338 ************************************ 00:38:49.338 START TEST fio_dif_digest 00:38:49.338 ************************************ 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.338 bdev_null0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:49.338 [2024-11-19 11:32:57.656008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:49.338 11:32:57 
nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:49.338 { 00:38:49.338 "params": { 00:38:49.338 "name": "Nvme$subsystem", 00:38:49.338 "trtype": "$TEST_TRANSPORT", 00:38:49.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:49.338 "adrfam": "ipv4", 00:38:49.338 "trsvcid": "$NVMF_PORT", 00:38:49.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:49.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:49.338 "hdgst": ${hdgst:-false}, 00:38:49.338 "ddgst": ${ddgst:-false} 00:38:49.338 }, 00:38:49.338 "method": "bdev_nvme_attach_controller" 00:38:49.338 } 00:38:49.338 EOF 00:38:49.338 )") 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:49.338 11:32:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:49.338 "params": { 00:38:49.339 "name": "Nvme0", 00:38:49.339 "trtype": "tcp", 00:38:49.339 "traddr": "10.0.0.2", 00:38:49.339 "adrfam": "ipv4", 00:38:49.339 "trsvcid": "4420", 00:38:49.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:49.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:49.339 "hdgst": true, 00:38:49.339 "ddgst": true 00:38:49.339 }, 00:38:49.339 "method": "bdev_nvme_attach_controller" 00:38:49.339 }' 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:49.599 11:32:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:49.860 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:49.860 ... 00:38:49.860 fio-3.35 00:38:49.860 Starting 3 threads 00:39:02.099 00:39:02.099 filename0: (groupid=0, jobs=1): err= 0: pid=276722: Tue Nov 19 11:33:08 2024 00:39:02.099 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(268MiB/10009msec) 00:39:02.099 slat (nsec): min=5772, max=31671, avg=7124.28, stdev=1512.44 00:39:02.099 clat (usec): min=8057, max=56184, avg=13974.47, stdev=4973.59 00:39:02.099 lat (usec): min=8066, max=56191, avg=13981.59, stdev=4973.53 00:39:02.099 clat percentiles (usec): 00:39:02.099 | 1.00th=[ 9241], 5.00th=[11338], 10.00th=[11994], 20.00th=[12649], 00:39:02.099 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:39:02.099 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:39:02.099 | 99.00th=[53740], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:39:02.099 | 99.99th=[56361] 00:39:02.099 bw ( KiB/s): min=22316, max=29952, per=32.47%, avg=27458.20, stdev=1952.13, samples=20 00:39:02.099 iops : min= 174, max= 234, avg=214.50, stdev=15.30, samples=20 00:39:02.099 lat (msec) : 10=1.72%, 20=96.88%, 100=1.40% 00:39:02.099 cpu : usr=94.42%, sys=5.33%, ctx=27, majf=0, minf=146 00:39:02.099 IO 
depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:02.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.099 issued rwts: total=2147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.099 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:02.099 filename0: (groupid=0, jobs=1): err= 0: pid=276723: Tue Nov 19 11:33:08 2024 00:39:02.099 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(288MiB/10045msec) 00:39:02.099 slat (nsec): min=5766, max=34262, avg=7148.23, stdev=1633.69 00:39:02.099 clat (usec): min=7112, max=55404, avg=13046.35, stdev=2359.72 00:39:02.099 lat (usec): min=7121, max=55414, avg=13053.49, stdev=2359.78 00:39:02.099 clat percentiles (usec): 00:39:02.100 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[11994], 00:39:02.100 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13435], 00:39:02.100 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15139], 00:39:02.100 | 99.00th=[16188], 99.50th=[16909], 99.90th=[52691], 99.95th=[52691], 00:39:02.100 | 99.99th=[55313] 00:39:02.100 bw ( KiB/s): min=26112, max=32000, per=34.86%, avg=29478.40, stdev=1265.84, samples=20 00:39:02.100 iops : min= 204, max= 250, avg=230.30, stdev= 9.89, samples=20 00:39:02.100 lat (msec) : 10=6.03%, 20=93.75%, 50=0.09%, 100=0.13% 00:39:02.100 cpu : usr=94.05%, sys=5.68%, ctx=20, majf=0, minf=134 00:39:02.100 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:02.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.100 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.100 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:02.100 filename0: (groupid=0, jobs=1): err= 0: pid=276724: Tue Nov 19 11:33:08 2024 00:39:02.100 read: 
IOPS=217, BW=27.2MiB/s (28.5MB/s)(273MiB/10045msec) 00:39:02.100 slat (nsec): min=5775, max=32661, avg=7191.51, stdev=1555.58 00:39:02.100 clat (usec): min=7929, max=56958, avg=13769.38, stdev=3628.19 00:39:02.100 lat (usec): min=7936, max=56964, avg=13776.57, stdev=3628.22 00:39:02.100 clat percentiles (usec): 00:39:02.100 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11731], 20.00th=[12518], 00:39:02.100 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:39:02.100 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15270], 95.00th=[15664], 00:39:02.100 | 99.00th=[16909], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 00:39:02.100 | 99.99th=[56886] 00:39:02.100 bw ( KiB/s): min=25344, max=29952, per=33.03%, avg=27932.40, stdev=1360.57, samples=20 00:39:02.100 iops : min= 198, max= 234, avg=218.20, stdev=10.62, samples=20 00:39:02.100 lat (msec) : 10=4.21%, 20=95.15%, 100=0.64% 00:39:02.100 cpu : usr=94.13%, sys=5.61%, ctx=18, majf=0, minf=95 00:39:02.100 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:02.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.100 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.100 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:02.100 00:39:02.100 Run status group 0 (all jobs): 00:39:02.100 READ: bw=82.6MiB/s (86.6MB/s), 26.8MiB/s-28.7MiB/s (28.1MB/s-30.1MB/s), io=830MiB (870MB), run=10009-10045msec 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 
00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.100 00:39:02.100 real 0m11.220s 00:39:02.100 user 0m42.781s 00:39:02.100 sys 0m1.997s 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:02.100 11:33:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:02.100 ************************************ 00:39:02.100 END TEST fio_dif_digest 00:39:02.100 ************************************ 00:39:02.100 11:33:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:02.100 11:33:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:02.100 rmmod nvme_tcp 00:39:02.100 rmmod nvme_fabrics 00:39:02.100 rmmod nvme_keyring 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 266397 ']' 00:39:02.100 11:33:08 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 266397 00:39:02.100 11:33:08 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 266397 ']' 00:39:02.100 11:33:08 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 266397 00:39:02.100 11:33:08 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:39:02.100 11:33:08 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:02.100 11:33:08 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 266397 00:39:02.100 11:33:09 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:02.100 11:33:09 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:02.100 11:33:09 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 266397' 00:39:02.100 killing process with pid 266397 00:39:02.100 11:33:09 nvmf_dif -- common/autotest_common.sh@973 -- # kill 266397 00:39:02.100 11:33:09 nvmf_dif -- common/autotest_common.sh@978 -- # wait 266397 00:39:02.100 11:33:09 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:02.100 11:33:09 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:04.646 Waiting for block devices as requested 00:39:04.907 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:04.907 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:04.907 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:04.907 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:05.168 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:05.168 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:05.168 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:05.428 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:05.428 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:39:05.689 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:05.689 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:05.689 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:05.689 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:05.948 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:05.948 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:05.948 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:06.208 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:06.469 11:33:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.469 11:33:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:06.469 11:33:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.381 11:33:16 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:08.381 00:39:08.381 real 1m20.066s 00:39:08.381 user 7m59.292s 00:39:08.381 sys 0m23.334s 00:39:08.381 11:33:16 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:08.381 11:33:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:08.381 ************************************ 00:39:08.381 END TEST nvmf_dif 00:39:08.381 ************************************ 00:39:08.642 11:33:16 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:08.642 11:33:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:08.642 11:33:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:08.642 11:33:16 -- common/autotest_common.sh@10 -- # set +x 00:39:08.642 ************************************ 00:39:08.642 START TEST nvmf_abort_qd_sizes 00:39:08.642 ************************************ 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:08.642 * Looking for test storage... 00:39:08.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:08.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.642 --rc genhtml_branch_coverage=1 00:39:08.642 --rc genhtml_function_coverage=1 00:39:08.642 --rc 
genhtml_legend=1 00:39:08.642 --rc geninfo_all_blocks=1 00:39:08.642 --rc geninfo_unexecuted_blocks=1 00:39:08.642 00:39:08.642 ' 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:08.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.642 --rc genhtml_branch_coverage=1 00:39:08.642 --rc genhtml_function_coverage=1 00:39:08.642 --rc genhtml_legend=1 00:39:08.642 --rc geninfo_all_blocks=1 00:39:08.642 --rc geninfo_unexecuted_blocks=1 00:39:08.642 00:39:08.642 ' 00:39:08.642 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:08.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.643 --rc genhtml_branch_coverage=1 00:39:08.643 --rc genhtml_function_coverage=1 00:39:08.643 --rc genhtml_legend=1 00:39:08.643 --rc geninfo_all_blocks=1 00:39:08.643 --rc geninfo_unexecuted_blocks=1 00:39:08.643 00:39:08.643 ' 00:39:08.643 11:33:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:08.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.643 --rc genhtml_branch_coverage=1 00:39:08.643 --rc genhtml_function_coverage=1 00:39:08.643 --rc genhtml_legend=1 00:39:08.643 --rc geninfo_all_blocks=1 00:39:08.643 --rc geninfo_unexecuted_blocks=1 00:39:08.643 00:39:08.643 ' 00:39:08.643 11:33:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.643 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.903 11:33:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:08.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:08.903 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:39:08.904 11:33:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:17.045 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:17.045 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:17.045 Found net devices under 0000:31:00.0: cvl_0_0 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:17.045 Found net devices under 0000:31:00.1: cvl_0_1 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:17.045 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:17.046 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:17.046 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:17.046 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:17.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:17.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:39:17.307 00:39:17.307 --- 10.0.0.2 ping statistics --- 00:39:17.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.307 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:17.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:17.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:39:17.307 00:39:17.307 --- 10.0.0.1 ping statistics --- 00:39:17.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.307 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:17.307 11:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:21.519 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:39:21.520 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:21.520 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=287644 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 287644 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 287644 ']' 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:21.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:21.520 11:33:29 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:21.520 [2024-11-19 11:33:29.805836] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:39:21.520 [2024-11-19 11:33:29.805907] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:21.781 [2024-11-19 11:33:29.898244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:21.781 [2024-11-19 11:33:29.941356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:21.781 [2024-11-19 11:33:29.941393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:21.781 [2024-11-19 11:33:29.941402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:21.781 [2024-11-19 11:33:29.941409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:21.781 [2024-11-19 11:33:29.941416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:21.781 [2024-11-19 11:33:29.943001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.781 [2024-11-19 11:33:29.943307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:21.781 [2024-11-19 11:33:29.943467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:21.781 [2024-11-19 11:33:29.943468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.353 11:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:22.614 ************************************ 00:39:22.614 START TEST spdk_target_abort 00:39:22.614 ************************************ 00:39:22.614 11:33:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:22.614 11:33:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:22.614 11:33:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:22.614 11:33:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.614 11:33:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.876 spdk_targetn1 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.876 [2024-11-19 11:33:31.023909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.876 [2024-11-19 11:33:31.072192] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:22.876 11:33:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:23.138 [2024-11-19 11:33:31.346540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:32 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:39:23.138 [2024-11-19 11:33:31.346568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0005 p:1 m:0 dnr:0 00:39:23.138 [2024-11-19 11:33:31.362333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:560 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:39:23.138 [2024-11-19 11:33:31.362350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0047 p:1 m:0 dnr:0 00:39:23.138 [2024-11-19 11:33:31.408362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2264 len:8 PRP1 0x200004aca000 PRP2 0x0 00:39:23.138 [2024-11-19 
11:33:31.408380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:39:23.138 [2024-11-19 11:33:31.440290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3448 len:8 PRP1 0x200004aca000 PRP2 0x0 00:39:23.138 [2024-11-19 11:33:31.440306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:39:23.138 [2024-11-19 11:33:31.456367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4032 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:39:23.138 [2024-11-19 11:33:31.456383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:39:26.441 Initializing NVMe Controllers 00:39:26.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:26.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:26.441 Initialization complete. Launching workers. 
00:39:26.441 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13467, failed: 5 00:39:26.441 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2480, failed to submit 10992 00:39:26.441 success 753, unsuccessful 1727, failed 0 00:39:26.441 11:33:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:26.441 11:33:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:26.442 [2024-11-19 11:33:34.609950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:296 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:39:26.442 [2024-11-19 11:33:34.609990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0033 p:1 m:0 dnr:0 00:39:26.442 [2024-11-19 11:33:34.679995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1984 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:39:26.442 [2024-11-19 11:33:34.680022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:39:26.442 [2024-11-19 11:33:34.702686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:2552 len:8 PRP1 0x200004e40000 PRP2 0x0 00:39:26.442 [2024-11-19 11:33:34.702710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:26.442 [2024-11-19 11:33:34.753830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3856 len:8 PRP1 0x200004e48000 PRP2 0x0 00:39:26.442 [2024-11-19 11:33:34.753854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00e6 p:0 m:0 dnr:0 00:39:29.744 [2024-11-19 11:33:37.346043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:63032 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:39:29.744 [2024-11-19 11:33:37.346085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:00cd p:1 m:0 dnr:0 00:39:29.745 Initializing NVMe Controllers 00:39:29.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:29.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:29.745 Initialization complete. Launching workers. 00:39:29.745 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8607, failed: 5 00:39:29.745 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7386 00:39:29.745 success 334, unsuccessful 892, failed 0 00:39:29.745 11:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:29.745 11:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:33.042 Initializing NVMe Controllers 00:39:33.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:33.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:33.042 Initialization complete. Launching workers. 
00:39:33.042 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42294, failed: 0 00:39:33.042 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2563, failed to submit 39731 00:39:33.042 success 591, unsuccessful 1972, failed 0 00:39:33.042 11:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:33.042 11:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.042 11:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:33.042 11:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.042 11:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:33.042 11:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.042 11:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.425 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.425 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 287644 00:39:34.426 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 287644 ']' 00:39:34.426 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 287644 00:39:34.426 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:34.426 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:34.686 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287644 00:39:34.686 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:34.686 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:34.686 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287644' 00:39:34.686 killing process with pid 287644 00:39:34.686 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 287644 00:39:34.686 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 287644 00:39:34.686 00:39:34.687 real 0m12.245s 00:39:34.687 user 0m49.899s 00:39:34.687 sys 0m1.925s 00:39:34.687 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:34.687 11:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.687 ************************************ 00:39:34.687 END TEST spdk_target_abort 00:39:34.687 ************************************ 00:39:34.687 11:33:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:34.687 11:33:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:34.687 11:33:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:34.687 11:33:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:34.687 ************************************ 00:39:34.687 START TEST kernel_target_abort 00:39:34.687 ************************************ 00:39:34.687 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:39:34.947 11:33:43 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:34.947 11:33:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:38.248 Waiting for block devices as requested 00:39:38.248 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:38.508 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:38.508 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:38.508 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:38.768 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:38.768 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:38.768 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:38.768 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:39.028 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:39.028 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:39.289 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:39.289 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:39.289 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:39.289 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:39.551 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:39.551 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:39.551 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:39.813 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:40.074 No valid GPT data, bailing 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:39:40.074 00:39:40.074 Discovery Log Number of Records 2, Generation counter 2 00:39:40.074 =====Discovery Log Entry 0====== 00:39:40.074 trtype: tcp 00:39:40.074 adrfam: ipv4 00:39:40.074 subtype: current discovery subsystem 00:39:40.074 treq: not specified, sq flow control disable supported 00:39:40.074 portid: 1 00:39:40.074 trsvcid: 4420 00:39:40.074 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:40.074 traddr: 10.0.0.1 00:39:40.074 eflags: none 00:39:40.074 sectype: none 00:39:40.074 =====Discovery Log Entry 1====== 00:39:40.074 trtype: tcp 00:39:40.074 adrfam: ipv4 00:39:40.074 subtype: nvme subsystem 00:39:40.074 treq: not specified, sq flow control disable supported 00:39:40.074 portid: 1 00:39:40.074 trsvcid: 4420 00:39:40.074 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:40.074 traddr: 10.0.0.1 00:39:40.074 eflags: none 00:39:40.074 sectype: none 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:40.074 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:40.075 11:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:43.384 Initializing NVMe Controllers 00:39:43.385 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:43.385 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:43.385 Initialization complete. Launching workers. 
00:39:43.385 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67216, failed: 0 00:39:43.385 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67216, failed to submit 0 00:39:43.385 success 0, unsuccessful 67216, failed 0 00:39:43.385 11:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:43.385 11:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:46.685 Initializing NVMe Controllers 00:39:46.685 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:46.685 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:46.685 Initialization complete. Launching workers. 00:39:46.685 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108177, failed: 0 00:39:46.685 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27206, failed to submit 80971 00:39:46.685 success 0, unsuccessful 27206, failed 0 00:39:46.685 11:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:46.686 11:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:49.986 Initializing NVMe Controllers 00:39:49.986 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:49.986 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:49.986 Initialization complete. Launching workers. 
00:39:49.986 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101929, failed: 0 00:39:49.986 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25498, failed to submit 76431 00:39:49.986 success 0, unsuccessful 25498, failed 0 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:49.986 11:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:53.299 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:53.299 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:55.211 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:55.211 00:39:55.211 real 0m20.419s 00:39:55.211 user 0m9.906s 00:39:55.211 sys 0m6.132s 00:39:55.211 11:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:55.211 11:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:55.211 ************************************ 00:39:55.211 END TEST kernel_target_abort 00:39:55.211 ************************************ 00:39:55.211 11:34:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:55.211 11:34:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:55.211 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:55.211 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:55.211 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:55.211 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:55.211 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:55.211 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:55.211 rmmod nvme_tcp 00:39:55.211 rmmod nvme_fabrics 00:39:55.211 rmmod nvme_keyring 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 287644 ']' 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 287644 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 287644 ']' 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 287644 00:39:55.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (287644) - No such process 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 287644 is not found' 00:39:55.471 Process with pid 287644 is not found 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:55.471 11:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:59.683 Waiting for block devices as requested 00:39:59.683 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:59.683 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:59.683 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:59.683 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:59.683 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:59.683 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:59.683 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:59.944 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:59.944 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:59.944 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:00.206 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:00.206 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:00.206 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:00.469 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:00.469 0000:00:01.3 
(8086 0b00): vfio-pci -> ioatdma 00:40:00.469 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:00.469 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:01.048 11:34:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.960 11:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:02.960 00:40:02.960 real 0m54.389s 00:40:02.960 user 1m5.757s 00:40:02.960 sys 0m20.453s 00:40:02.960 11:34:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:02.960 11:34:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:02.960 ************************************ 00:40:02.960 END TEST nvmf_abort_qd_sizes 00:40:02.960 ************************************ 00:40:02.960 11:34:11 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:02.960 11:34:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:02.960 11:34:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:40:02.960 11:34:11 -- common/autotest_common.sh@10 -- # set +x 00:40:02.960 ************************************ 00:40:02.960 START TEST keyring_file 00:40:02.960 ************************************ 00:40:02.960 11:34:11 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:03.221 * Looking for test storage... 00:40:03.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:03.221 11:34:11 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.221 --rc genhtml_branch_coverage=1 00:40:03.221 --rc genhtml_function_coverage=1 00:40:03.221 --rc genhtml_legend=1 00:40:03.221 --rc geninfo_all_blocks=1 00:40:03.221 --rc geninfo_unexecuted_blocks=1 00:40:03.221 00:40:03.221 ' 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.221 --rc genhtml_branch_coverage=1 00:40:03.221 --rc genhtml_function_coverage=1 00:40:03.221 --rc genhtml_legend=1 00:40:03.221 --rc geninfo_all_blocks=1 00:40:03.221 --rc 
geninfo_unexecuted_blocks=1 00:40:03.221 00:40:03.221 ' 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.221 --rc genhtml_branch_coverage=1 00:40:03.221 --rc genhtml_function_coverage=1 00:40:03.221 --rc genhtml_legend=1 00:40:03.221 --rc geninfo_all_blocks=1 00:40:03.221 --rc geninfo_unexecuted_blocks=1 00:40:03.221 00:40:03.221 ' 00:40:03.221 11:34:11 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.221 --rc genhtml_branch_coverage=1 00:40:03.221 --rc genhtml_function_coverage=1 00:40:03.221 --rc genhtml_legend=1 00:40:03.221 --rc geninfo_all_blocks=1 00:40:03.221 --rc geninfo_unexecuted_blocks=1 00:40:03.221 00:40:03.221 ' 00:40:03.221 11:34:11 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:03.221 11:34:11 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:03.221 11:34:11 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:03.221 11:34:11 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:03.221 11:34:11 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:03.221 11:34:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.221 11:34:11 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.221 11:34:11 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.221 11:34:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:03.222 11:34:11 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:03.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:03.222 11:34:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:03.222 11:34:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:03.222 11:34:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:03.222 11:34:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:03.222 11:34:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:03.222 11:34:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZpdO4u6bWv 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZpdO4u6bWv 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZpdO4u6bWv 00:40:03.222 11:34:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ZpdO4u6bWv 00:40:03.222 11:34:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SHynYODaEB 00:40:03.222 11:34:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:03.222 11:34:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:03.482 11:34:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SHynYODaEB 00:40:03.482 11:34:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SHynYODaEB 00:40:03.482 11:34:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SHynYODaEB 
00:40:03.482 11:34:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=298363 00:40:03.482 11:34:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 298363 00:40:03.482 11:34:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:03.482 11:34:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 298363 ']' 00:40:03.482 11:34:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:03.482 11:34:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:03.482 11:34:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:03.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:03.482 11:34:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:03.482 11:34:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:03.482 [2024-11-19 11:34:11.658159] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:40:03.482 [2024-11-19 11:34:11.658218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298363 ] 00:40:03.482 [2024-11-19 11:34:11.739991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:03.482 [2024-11-19 11:34:11.778221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:04.427 11:34:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:04.427 [2024-11-19 11:34:12.457857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:04.427 null0 00:40:04.427 [2024-11-19 11:34:12.489906] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:04.427 [2024-11-19 11:34:12.490215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.427 11:34:12 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:04.427 [2024-11-19 11:34:12.521970] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:04.427 request: 00:40:04.427 { 00:40:04.427 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:04.427 "secure_channel": false, 00:40:04.427 "listen_address": { 00:40:04.427 "trtype": "tcp", 00:40:04.427 "traddr": "127.0.0.1", 00:40:04.427 "trsvcid": "4420" 00:40:04.427 }, 00:40:04.427 "method": "nvmf_subsystem_add_listener", 00:40:04.427 "req_id": 1 00:40:04.427 } 00:40:04.427 Got JSON-RPC error response 00:40:04.427 response: 00:40:04.427 { 00:40:04.427 "code": -32602, 00:40:04.427 "message": "Invalid parameters" 00:40:04.427 } 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:04.427 11:34:12 keyring_file -- keyring/file.sh@47 -- # bperfpid=298446 00:40:04.427 11:34:12 keyring_file -- keyring/file.sh@49 -- # waitforlisten 298446 /var/tmp/bperf.sock 00:40:04.427 11:34:12 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:04.427 11:34:12 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 298446 ']' 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:04.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:04.427 11:34:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:04.427 [2024-11-19 11:34:12.580078] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 00:40:04.428 [2024-11-19 11:34:12.580125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298446 ] 00:40:04.428 [2024-11-19 11:34:12.672383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.428 [2024-11-19 11:34:12.708591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.056 11:34:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:05.056 11:34:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:05.056 11:34:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZpdO4u6bWv 00:40:05.056 11:34:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZpdO4u6bWv 00:40:05.380 11:34:13 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SHynYODaEB 00:40:05.380 11:34:13 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SHynYODaEB 00:40:05.380 11:34:13 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:05.380 11:34:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:05.380 11:34:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.380 11:34:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:05.380 11:34:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.680 11:34:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ZpdO4u6bWv == \/\t\m\p\/\t\m\p\.\Z\p\d\O\4\u\6\b\W\v ]] 00:40:05.680 11:34:13 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:05.680 11:34:13 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:05.680 11:34:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.680 11:34:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.680 11:34:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:05.967 11:34:14 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.SHynYODaEB == \/\t\m\p\/\t\m\p\.\S\H\y\n\Y\O\D\a\E\B ]] 00:40:05.967 11:34:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:40:05.967 11:34:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:05.967 11:34:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:05.967 11:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:06.253 11:34:14 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:06.253 11:34:14 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:06.253 11:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:06.253 [2024-11-19 11:34:14.571238] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:06.515 nvme0n1 00:40:06.515 11:34:14 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:40:06.515 11:34:14 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:06.515 11:34:14 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:06.515 11:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:06.777 11:34:15 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:06.777 11:34:15 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:06.777 Running I/O for 1 seconds... 00:40:08.162 16044.00 IOPS, 62.67 MiB/s 00:40:08.162 Latency(us) 00:40:08.162 [2024-11-19T10:34:16.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:08.162 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:08.162 nvme0n1 : 1.05 15454.27 60.37 0.00 0.00 7987.38 6526.29 47185.92 00:40:08.162 [2024-11-19T10:34:16.514Z] =================================================================================================================== 00:40:08.162 [2024-11-19T10:34:16.514Z] Total : 15454.27 60.37 0.00 0.00 7987.38 6526.29 47185.92 00:40:08.162 { 00:40:08.162 "results": [ 00:40:08.162 { 00:40:08.162 "job": "nvme0n1", 00:40:08.162 "core_mask": "0x2", 00:40:08.162 "workload": "randrw", 00:40:08.162 "percentage": 50, 00:40:08.162 "status": "finished", 00:40:08.162 "queue_depth": 128, 00:40:08.162 "io_size": 4096, 00:40:08.162 "runtime": 1.046507, 00:40:08.162 "iops": 15454.26834220889, 00:40:08.162 "mibps": 60.36823571175348, 
00:40:08.162 "io_failed": 0, 00:40:08.162 "io_timeout": 0, 00:40:08.162 "avg_latency_us": 7987.3815041530115, 00:40:08.162 "min_latency_us": 6526.293333333333, 00:40:08.162 "max_latency_us": 47185.92 00:40:08.162 } 00:40:08.162 ], 00:40:08.162 "core_count": 1 00:40:08.162 } 00:40:08.162 11:34:16 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:08.162 11:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:08.162 11:34:16 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:08.162 11:34:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:08.162 11:34:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.162 11:34:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.162 11:34:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:08.162 11:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.423 11:34:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:08.423 11:34:16 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:08.423 11:34:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:08.423 11:34:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.423 11:34:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.423 11:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.423 11:34:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:08.423 11:34:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:08.423 11:34:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:08.423 11:34:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:08.423 11:34:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:08.423 11:34:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:08.423 11:34:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:08.423 11:34:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:08.423 11:34:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:08.424 11:34:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:08.424 11:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:08.684 [2024-11-19 11:34:16.863300] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:08.684 [2024-11-19 11:34:16.864082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a059d0 (107): Transport endpoint is not connected 00:40:08.684 [2024-11-19 11:34:16.865077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a059d0 (9): Bad file descriptor 00:40:08.684 [2024-11-19 11:34:16.866079] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:08.684 [2024-11-19 11:34:16.866086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:08.684 [2024-11-19 11:34:16.866091] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:08.684 [2024-11-19 11:34:16.866098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:40:08.684 request: 00:40:08.684 { 00:40:08.684 "name": "nvme0", 00:40:08.684 "trtype": "tcp", 00:40:08.684 "traddr": "127.0.0.1", 00:40:08.684 "adrfam": "ipv4", 00:40:08.684 "trsvcid": "4420", 00:40:08.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:08.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:08.684 "prchk_reftag": false, 00:40:08.684 "prchk_guard": false, 00:40:08.684 "hdgst": false, 00:40:08.684 "ddgst": false, 00:40:08.684 "psk": "key1", 00:40:08.684 "allow_unrecognized_csi": false, 00:40:08.684 "method": "bdev_nvme_attach_controller", 00:40:08.684 "req_id": 1 00:40:08.684 } 00:40:08.684 Got JSON-RPC error response 00:40:08.684 response: 00:40:08.684 { 00:40:08.684 "code": -5, 00:40:08.684 "message": "Input/output error" 00:40:08.684 } 00:40:08.684 11:34:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:08.684 11:34:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:08.684 11:34:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:08.684 11:34:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:08.684 11:34:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:08.684 11:34:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:08.684 11:34:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.684 11:34:16 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:40:08.684 11:34:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:08.684 11:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.945 11:34:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:08.945 11:34:17 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:08.945 11:34:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:08.945 11:34:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.945 11:34:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.945 11:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.945 11:34:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:08.945 11:34:17 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:08.945 11:34:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:08.945 11:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:09.205 11:34:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:09.205 11:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:09.466 11:34:17 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:09.466 11:34:17 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:09.466 11:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:09.466 11:34:17 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:40:09.466 11:34:17 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ZpdO4u6bWv 00:40:09.466 11:34:17 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZpdO4u6bWv 00:40:09.466 11:34:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:09.466 11:34:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZpdO4u6bWv 00:40:09.466 11:34:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:09.466 11:34:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:09.466 11:34:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:09.466 11:34:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:09.466 11:34:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZpdO4u6bWv 00:40:09.466 11:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZpdO4u6bWv 00:40:09.727 [2024-11-19 11:34:17.935848] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZpdO4u6bWv': 0100660 00:40:09.727 [2024-11-19 11:34:17.935873] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:09.727 request: 00:40:09.727 { 00:40:09.727 "name": "key0", 00:40:09.727 "path": "/tmp/tmp.ZpdO4u6bWv", 00:40:09.727 "method": "keyring_file_add_key", 00:40:09.727 "req_id": 1 00:40:09.727 } 00:40:09.727 Got JSON-RPC error response 00:40:09.727 response: 00:40:09.727 { 00:40:09.727 "code": -1, 00:40:09.727 "message": "Operation not permitted" 00:40:09.727 } 00:40:09.727 11:34:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:09.727 11:34:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:09.727 11:34:17 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:09.727 11:34:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:09.727 11:34:17 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ZpdO4u6bWv 00:40:09.727 11:34:17 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZpdO4u6bWv 00:40:09.727 11:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZpdO4u6bWv 00:40:09.989 11:34:18 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ZpdO4u6bWv 00:40:09.989 11:34:18 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:09.989 11:34:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:09.989 11:34:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:09.989 11:34:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:09.989 11:34:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:09.989 11:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:09.989 11:34:18 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:09.989 11:34:18 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:09.989 11:34:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:09.989 11:34:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:09.989 11:34:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:09.989 11:34:18 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:09.989 11:34:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:09.989 11:34:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:09.989 11:34:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:09.989 11:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:10.250 [2024-11-19 11:34:18.457171] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ZpdO4u6bWv': No such file or directory 00:40:10.250 [2024-11-19 11:34:18.457185] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:10.250 [2024-11-19 11:34:18.457198] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:10.250 [2024-11-19 11:34:18.457203] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:10.250 [2024-11-19 11:34:18.457209] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:10.250 [2024-11-19 11:34:18.457213] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:10.250 request: 00:40:10.250 { 00:40:10.250 "name": "nvme0", 00:40:10.250 "trtype": "tcp", 00:40:10.250 "traddr": "127.0.0.1", 00:40:10.250 "adrfam": "ipv4", 00:40:10.250 "trsvcid": "4420", 00:40:10.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:10.250 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:40:10.250 "prchk_reftag": false, 00:40:10.250 "prchk_guard": false, 00:40:10.250 "hdgst": false, 00:40:10.250 "ddgst": false, 00:40:10.250 "psk": "key0", 00:40:10.250 "allow_unrecognized_csi": false, 00:40:10.250 "method": "bdev_nvme_attach_controller", 00:40:10.250 "req_id": 1 00:40:10.250 } 00:40:10.250 Got JSON-RPC error response 00:40:10.250 response: 00:40:10.250 { 00:40:10.250 "code": -19, 00:40:10.250 "message": "No such device" 00:40:10.250 } 00:40:10.250 11:34:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:10.250 11:34:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:10.250 11:34:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:10.250 11:34:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:10.250 11:34:18 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:10.250 11:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:10.511 11:34:18 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nIIw7lnQmc 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:10.511 11:34:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:10.511 11:34:18 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:40:10.511 11:34:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:10.511 11:34:18 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:10.511 11:34:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:10.511 11:34:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nIIw7lnQmc 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nIIw7lnQmc 00:40:10.511 11:34:18 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nIIw7lnQmc 00:40:10.511 11:34:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIIw7lnQmc 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nIIw7lnQmc 00:40:10.511 11:34:18 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:10.511 11:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:10.771 nvme0n1 00:40:10.771 11:34:19 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:10.771 11:34:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:10.771 11:34:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:10.771 11:34:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:10.771 11:34:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:10.771 11:34:19 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.030 11:34:19 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:11.030 11:34:19 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:11.030 11:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:11.289 11:34:19 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:11.289 11:34:19 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:11.290 11:34:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:11.290 11:34:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:11.290 11:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.290 11:34:19 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:11.290 11:34:19 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:11.290 11:34:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:11.290 11:34:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:11.290 11:34:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:11.290 11:34:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:11.290 11:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.549 11:34:19 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:11.549 11:34:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:11.549 11:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:40:11.810 11:34:19 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:11.810 11:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.810 11:34:19 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:12.070 11:34:20 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:12.070 11:34:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIIw7lnQmc 00:40:12.070 11:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nIIw7lnQmc 00:40:12.070 11:34:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SHynYODaEB 00:40:12.070 11:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SHynYODaEB 00:40:12.331 11:34:20 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.331 11:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.591 nvme0n1 00:40:12.591 11:34:20 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:12.591 11:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:12.853 11:34:21 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:12.853 "subsystems": [ 00:40:12.853 { 00:40:12.853 "subsystem": 
"keyring", 00:40:12.853 "config": [ 00:40:12.853 { 00:40:12.853 "method": "keyring_file_add_key", 00:40:12.853 "params": { 00:40:12.853 "name": "key0", 00:40:12.853 "path": "/tmp/tmp.nIIw7lnQmc" 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "keyring_file_add_key", 00:40:12.853 "params": { 00:40:12.853 "name": "key1", 00:40:12.853 "path": "/tmp/tmp.SHynYODaEB" 00:40:12.853 } 00:40:12.853 } 00:40:12.853 ] 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "subsystem": "iobuf", 00:40:12.853 "config": [ 00:40:12.853 { 00:40:12.853 "method": "iobuf_set_options", 00:40:12.853 "params": { 00:40:12.853 "small_pool_count": 8192, 00:40:12.853 "large_pool_count": 1024, 00:40:12.853 "small_bufsize": 8192, 00:40:12.853 "large_bufsize": 135168, 00:40:12.853 "enable_numa": false 00:40:12.853 } 00:40:12.853 } 00:40:12.853 ] 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "subsystem": "sock", 00:40:12.853 "config": [ 00:40:12.853 { 00:40:12.853 "method": "sock_set_default_impl", 00:40:12.853 "params": { 00:40:12.853 "impl_name": "posix" 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "sock_impl_set_options", 00:40:12.853 "params": { 00:40:12.853 "impl_name": "ssl", 00:40:12.853 "recv_buf_size": 4096, 00:40:12.853 "send_buf_size": 4096, 00:40:12.853 "enable_recv_pipe": true, 00:40:12.853 "enable_quickack": false, 00:40:12.853 "enable_placement_id": 0, 00:40:12.853 "enable_zerocopy_send_server": true, 00:40:12.853 "enable_zerocopy_send_client": false, 00:40:12.853 "zerocopy_threshold": 0, 00:40:12.853 "tls_version": 0, 00:40:12.853 "enable_ktls": false 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "sock_impl_set_options", 00:40:12.853 "params": { 00:40:12.853 "impl_name": "posix", 00:40:12.853 "recv_buf_size": 2097152, 00:40:12.853 "send_buf_size": 2097152, 00:40:12.853 "enable_recv_pipe": true, 00:40:12.853 "enable_quickack": false, 00:40:12.853 "enable_placement_id": 0, 00:40:12.853 "enable_zerocopy_send_server": true, 
00:40:12.853 "enable_zerocopy_send_client": false, 00:40:12.853 "zerocopy_threshold": 0, 00:40:12.853 "tls_version": 0, 00:40:12.853 "enable_ktls": false 00:40:12.853 } 00:40:12.853 } 00:40:12.853 ] 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "subsystem": "vmd", 00:40:12.853 "config": [] 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "subsystem": "accel", 00:40:12.853 "config": [ 00:40:12.853 { 00:40:12.853 "method": "accel_set_options", 00:40:12.853 "params": { 00:40:12.853 "small_cache_size": 128, 00:40:12.853 "large_cache_size": 16, 00:40:12.853 "task_count": 2048, 00:40:12.853 "sequence_count": 2048, 00:40:12.853 "buf_count": 2048 00:40:12.853 } 00:40:12.853 } 00:40:12.853 ] 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "subsystem": "bdev", 00:40:12.853 "config": [ 00:40:12.853 { 00:40:12.853 "method": "bdev_set_options", 00:40:12.853 "params": { 00:40:12.853 "bdev_io_pool_size": 65535, 00:40:12.853 "bdev_io_cache_size": 256, 00:40:12.853 "bdev_auto_examine": true, 00:40:12.853 "iobuf_small_cache_size": 128, 00:40:12.853 "iobuf_large_cache_size": 16 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "bdev_raid_set_options", 00:40:12.853 "params": { 00:40:12.853 "process_window_size_kb": 1024, 00:40:12.853 "process_max_bandwidth_mb_sec": 0 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "bdev_iscsi_set_options", 00:40:12.853 "params": { 00:40:12.853 "timeout_sec": 30 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "bdev_nvme_set_options", 00:40:12.853 "params": { 00:40:12.853 "action_on_timeout": "none", 00:40:12.853 "timeout_us": 0, 00:40:12.853 "timeout_admin_us": 0, 00:40:12.853 "keep_alive_timeout_ms": 10000, 00:40:12.853 "arbitration_burst": 0, 00:40:12.853 "low_priority_weight": 0, 00:40:12.853 "medium_priority_weight": 0, 00:40:12.853 "high_priority_weight": 0, 00:40:12.853 "nvme_adminq_poll_period_us": 10000, 00:40:12.853 "nvme_ioq_poll_period_us": 0, 00:40:12.853 "io_queue_requests": 512, 
00:40:12.853 "delay_cmd_submit": true, 00:40:12.853 "transport_retry_count": 4, 00:40:12.853 "bdev_retry_count": 3, 00:40:12.853 "transport_ack_timeout": 0, 00:40:12.853 "ctrlr_loss_timeout_sec": 0, 00:40:12.853 "reconnect_delay_sec": 0, 00:40:12.853 "fast_io_fail_timeout_sec": 0, 00:40:12.853 "disable_auto_failback": false, 00:40:12.853 "generate_uuids": false, 00:40:12.853 "transport_tos": 0, 00:40:12.853 "nvme_error_stat": false, 00:40:12.853 "rdma_srq_size": 0, 00:40:12.853 "io_path_stat": false, 00:40:12.853 "allow_accel_sequence": false, 00:40:12.853 "rdma_max_cq_size": 0, 00:40:12.853 "rdma_cm_event_timeout_ms": 0, 00:40:12.853 "dhchap_digests": [ 00:40:12.853 "sha256", 00:40:12.853 "sha384", 00:40:12.853 "sha512" 00:40:12.853 ], 00:40:12.853 "dhchap_dhgroups": [ 00:40:12.853 "null", 00:40:12.853 "ffdhe2048", 00:40:12.853 "ffdhe3072", 00:40:12.853 "ffdhe4096", 00:40:12.853 "ffdhe6144", 00:40:12.853 "ffdhe8192" 00:40:12.853 ] 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "bdev_nvme_attach_controller", 00:40:12.853 "params": { 00:40:12.853 "name": "nvme0", 00:40:12.853 "trtype": "TCP", 00:40:12.853 "adrfam": "IPv4", 00:40:12.853 "traddr": "127.0.0.1", 00:40:12.853 "trsvcid": "4420", 00:40:12.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:12.853 "prchk_reftag": false, 00:40:12.853 "prchk_guard": false, 00:40:12.853 "ctrlr_loss_timeout_sec": 0, 00:40:12.853 "reconnect_delay_sec": 0, 00:40:12.853 "fast_io_fail_timeout_sec": 0, 00:40:12.853 "psk": "key0", 00:40:12.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:12.853 "hdgst": false, 00:40:12.853 "ddgst": false, 00:40:12.853 "multipath": "multipath" 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "bdev_nvme_set_hotplug", 00:40:12.853 "params": { 00:40:12.853 "period_us": 100000, 00:40:12.853 "enable": false 00:40:12.853 } 00:40:12.853 }, 00:40:12.853 { 00:40:12.853 "method": "bdev_wait_for_examine" 00:40:12.853 } 00:40:12.853 ] 00:40:12.853 }, 00:40:12.853 { 
00:40:12.853 "subsystem": "nbd", 00:40:12.853 "config": [] 00:40:12.853 } 00:40:12.853 ] 00:40:12.853 }' 00:40:12.853 11:34:21 keyring_file -- keyring/file.sh@115 -- # killprocess 298446 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 298446 ']' 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@958 -- # kill -0 298446 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298446 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298446' 00:40:12.853 killing process with pid 298446 00:40:12.853 11:34:21 keyring_file -- common/autotest_common.sh@973 -- # kill 298446 00:40:12.853 Received shutdown signal, test time was about 1.000000 seconds 00:40:12.854 00:40:12.854 Latency(us) 00:40:12.854 [2024-11-19T10:34:21.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.854 [2024-11-19T10:34:21.206Z] =================================================================================================================== 00:40:12.854 [2024-11-19T10:34:21.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:12.854 11:34:21 keyring_file -- common/autotest_common.sh@978 -- # wait 298446 00:40:12.854 11:34:21 keyring_file -- keyring/file.sh@118 -- # bperfpid=300194 00:40:12.854 11:34:21 keyring_file -- keyring/file.sh@120 -- # waitforlisten 300194 /var/tmp/bperf.sock 00:40:12.854 11:34:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 300194 ']' 00:40:12.854 11:34:21 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:40:12.854 11:34:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:12.854 11:34:21 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:12.854 11:34:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:12.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:12.854 11:34:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:12.854 11:34:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:12.854 11:34:21 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:12.854 "subsystems": [ 00:40:12.854 { 00:40:12.854 "subsystem": "keyring", 00:40:12.854 "config": [ 00:40:12.854 { 00:40:12.854 "method": "keyring_file_add_key", 00:40:12.854 "params": { 00:40:12.854 "name": "key0", 00:40:12.854 "path": "/tmp/tmp.nIIw7lnQmc" 00:40:12.854 } 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "method": "keyring_file_add_key", 00:40:12.854 "params": { 00:40:12.854 "name": "key1", 00:40:12.854 "path": "/tmp/tmp.SHynYODaEB" 00:40:12.854 } 00:40:12.854 } 00:40:12.854 ] 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "subsystem": "iobuf", 00:40:12.854 "config": [ 00:40:12.854 { 00:40:12.854 "method": "iobuf_set_options", 00:40:12.854 "params": { 00:40:12.854 "small_pool_count": 8192, 00:40:12.854 "large_pool_count": 1024, 00:40:12.854 "small_bufsize": 8192, 00:40:12.854 "large_bufsize": 135168, 00:40:12.854 "enable_numa": false 00:40:12.854 } 00:40:12.854 } 00:40:12.854 ] 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "subsystem": "sock", 00:40:12.854 "config": [ 00:40:12.854 { 00:40:12.854 "method": "sock_set_default_impl", 00:40:12.854 "params": { 00:40:12.854 "impl_name": "posix" 00:40:12.854 } 00:40:12.854 }, 
00:40:12.854 { 00:40:12.854 "method": "sock_impl_set_options", 00:40:12.854 "params": { 00:40:12.854 "impl_name": "ssl", 00:40:12.854 "recv_buf_size": 4096, 00:40:12.854 "send_buf_size": 4096, 00:40:12.854 "enable_recv_pipe": true, 00:40:12.854 "enable_quickack": false, 00:40:12.854 "enable_placement_id": 0, 00:40:12.854 "enable_zerocopy_send_server": true, 00:40:12.854 "enable_zerocopy_send_client": false, 00:40:12.854 "zerocopy_threshold": 0, 00:40:12.854 "tls_version": 0, 00:40:12.854 "enable_ktls": false 00:40:12.854 } 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "method": "sock_impl_set_options", 00:40:12.854 "params": { 00:40:12.854 "impl_name": "posix", 00:40:12.854 "recv_buf_size": 2097152, 00:40:12.854 "send_buf_size": 2097152, 00:40:12.854 "enable_recv_pipe": true, 00:40:12.854 "enable_quickack": false, 00:40:12.854 "enable_placement_id": 0, 00:40:12.854 "enable_zerocopy_send_server": true, 00:40:12.854 "enable_zerocopy_send_client": false, 00:40:12.854 "zerocopy_threshold": 0, 00:40:12.854 "tls_version": 0, 00:40:12.854 "enable_ktls": false 00:40:12.854 } 00:40:12.854 } 00:40:12.854 ] 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "subsystem": "vmd", 00:40:12.854 "config": [] 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "subsystem": "accel", 00:40:12.854 "config": [ 00:40:12.854 { 00:40:12.854 "method": "accel_set_options", 00:40:12.854 "params": { 00:40:12.854 "small_cache_size": 128, 00:40:12.854 "large_cache_size": 16, 00:40:12.854 "task_count": 2048, 00:40:12.854 "sequence_count": 2048, 00:40:12.854 "buf_count": 2048 00:40:12.854 } 00:40:12.854 } 00:40:12.854 ] 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "subsystem": "bdev", 00:40:12.854 "config": [ 00:40:12.854 { 00:40:12.854 "method": "bdev_set_options", 00:40:12.854 "params": { 00:40:12.854 "bdev_io_pool_size": 65535, 00:40:12.854 "bdev_io_cache_size": 256, 00:40:12.854 "bdev_auto_examine": true, 00:40:12.854 "iobuf_small_cache_size": 128, 00:40:12.854 "iobuf_large_cache_size": 16 00:40:12.854 } 
00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "method": "bdev_raid_set_options", 00:40:12.854 "params": { 00:40:12.854 "process_window_size_kb": 1024, 00:40:12.854 "process_max_bandwidth_mb_sec": 0 00:40:12.854 } 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "method": "bdev_iscsi_set_options", 00:40:12.854 "params": { 00:40:12.854 "timeout_sec": 30 00:40:12.854 } 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "method": "bdev_nvme_set_options", 00:40:12.854 "params": { 00:40:12.854 "action_on_timeout": "none", 00:40:12.854 "timeout_us": 0, 00:40:12.854 "timeout_admin_us": 0, 00:40:12.854 "keep_alive_timeout_ms": 10000, 00:40:12.854 "arbitration_burst": 0, 00:40:12.854 "low_priority_weight": 0, 00:40:12.854 "medium_priority_weight": 0, 00:40:12.854 "high_priority_weight": 0, 00:40:12.854 "nvme_adminq_poll_period_us": 10000, 00:40:12.854 "nvme_ioq_poll_period_us": 0, 00:40:12.854 "io_queue_requests": 512, 00:40:12.854 "delay_cmd_submit": true, 00:40:12.854 "transport_retry_count": 4, 00:40:12.854 "bdev_retry_count": 3, 00:40:12.854 "transport_ack_timeout": 0, 00:40:12.854 "ctrlr_loss_timeout_sec": 0, 00:40:12.854 "reconnect_delay_sec": 0, 00:40:12.854 "fast_io_fail_timeout_sec": 0, 00:40:12.854 "disable_auto_failback": false, 00:40:12.854 "generate_uuids": false, 00:40:12.854 "transport_tos": 0, 00:40:12.854 "nvme_error_stat": false, 00:40:12.854 "rdma_srq_size": 0, 00:40:12.854 "io_path_stat": false, 00:40:12.854 "allow_accel_sequence": false, 00:40:12.854 "rdma_max_cq_size": 0, 00:40:12.854 "rdma_cm_event_timeout_ms": 0, 00:40:12.854 "dhchap_digests": [ 00:40:12.854 "sha256", 00:40:12.854 "sha384", 00:40:12.854 "sha512" 00:40:12.854 ], 00:40:12.854 "dhchap_dhgroups": [ 00:40:12.854 "null", 00:40:12.854 "ffdhe2048", 00:40:12.854 "ffdhe3072", 00:40:12.854 "ffdhe4096", 00:40:12.854 "ffdhe6144", 00:40:12.854 "ffdhe8192" 00:40:12.854 ] 00:40:12.854 } 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "method": "bdev_nvme_attach_controller", 00:40:12.854 "params": { 00:40:12.854 
"name": "nvme0", 00:40:12.854 "trtype": "TCP", 00:40:12.854 "adrfam": "IPv4", 00:40:12.854 "traddr": "127.0.0.1", 00:40:12.854 "trsvcid": "4420", 00:40:12.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:12.854 "prchk_reftag": false, 00:40:12.854 "prchk_guard": false, 00:40:12.854 "ctrlr_loss_timeout_sec": 0, 00:40:12.854 "reconnect_delay_sec": 0, 00:40:12.854 "fast_io_fail_timeout_sec": 0, 00:40:12.854 "psk": "key0", 00:40:12.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:12.854 "hdgst": false, 00:40:12.854 "ddgst": false, 00:40:12.854 "multipath": "multipath" 00:40:12.854 } 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "method": "bdev_nvme_set_hotplug", 00:40:12.854 "params": { 00:40:12.854 "period_us": 100000, 00:40:12.854 "enable": false 00:40:12.854 } 00:40:12.854 }, 00:40:12.854 { 00:40:12.854 "method": "bdev_wait_for_examine" 00:40:12.854 } 00:40:12.854 ] 00:40:12.855 }, 00:40:12.855 { 00:40:12.855 "subsystem": "nbd", 00:40:12.855 "config": [] 00:40:12.855 } 00:40:12.855 ] 00:40:12.855 }' 00:40:13.115 [2024-11-19 11:34:21.220359] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:40:13.115 [2024-11-19 11:34:21.220418] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300194 ] 00:40:13.115 [2024-11-19 11:34:21.307688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.115 [2024-11-19 11:34:21.337432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.376 [2024-11-19 11:34:21.480420] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:13.637 11:34:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:13.637 11:34:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:13.898 11:34:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:13.898 11:34:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:13.898 11:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.898 11:34:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:13.898 11:34:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:13.898 11:34:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.898 11:34:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:13.898 11:34:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.898 11:34:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:13.898 11:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:14.159 11:34:22 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:14.159 11:34:22 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:14.159 11:34:22 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:14.159 11:34:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:14.159 11:34:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:14.159 11:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:14.159 11:34:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:14.419 11:34:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:14.419 11:34:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:14.419 11:34:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:14.419 11:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:14.419 11:34:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:14.419 11:34:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:14.419 11:34:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nIIw7lnQmc /tmp/tmp.SHynYODaEB 00:40:14.419 11:34:22 keyring_file -- keyring/file.sh@20 -- # killprocess 300194 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 300194 ']' 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 300194 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 300194 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 300194' 00:40:14.419 killing process with pid 300194 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@973 -- # kill 300194 00:40:14.419 Received shutdown signal, test time was about 1.000000 seconds 00:40:14.419 00:40:14.419 Latency(us) 00:40:14.419 [2024-11-19T10:34:22.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:14.419 [2024-11-19T10:34:22.771Z] =================================================================================================================== 00:40:14.419 [2024-11-19T10:34:22.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:14.419 11:34:22 keyring_file -- common/autotest_common.sh@978 -- # wait 300194 00:40:14.679 11:34:22 keyring_file -- keyring/file.sh@21 -- # killprocess 298363 00:40:14.679 11:34:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 298363 ']' 00:40:14.679 11:34:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 298363 00:40:14.679 11:34:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:14.679 11:34:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:14.680 11:34:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298363 00:40:14.680 11:34:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:14.680 11:34:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:14.680 11:34:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298363' 00:40:14.680 killing process with pid 298363 00:40:14.680 11:34:22 keyring_file -- common/autotest_common.sh@973 -- # kill 298363 00:40:14.680 11:34:22 keyring_file -- common/autotest_common.sh@978 -- # wait 298363 00:40:14.941 00:40:14.941 real 0m11.895s 00:40:14.941 user 0m28.545s 00:40:14.941 sys 0m2.671s 00:40:14.941 11:34:23 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:14.941 11:34:23 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:14.941 ************************************ 00:40:14.941 END TEST keyring_file 00:40:14.941 ************************************ 00:40:14.941 11:34:23 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:40:14.941 11:34:23 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:14.941 11:34:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:14.941 11:34:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:14.941 11:34:23 -- common/autotest_common.sh@10 -- # set +x 00:40:14.941 ************************************ 00:40:14.941 START TEST keyring_linux 00:40:14.941 ************************************ 00:40:14.941 11:34:23 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:14.941 Joined session keyring: 261399068 00:40:15.202 * Looking for test storage... 
00:40:15.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:15.202 11:34:23 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:15.202 11:34:23 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:40:15.202 11:34:23 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:15.202 11:34:23 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.202 11:34:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:15.203 11:34:23 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.203 11:34:23 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:15.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.203 --rc genhtml_branch_coverage=1 00:40:15.203 --rc genhtml_function_coverage=1 00:40:15.203 --rc genhtml_legend=1 00:40:15.203 --rc geninfo_all_blocks=1 00:40:15.203 --rc geninfo_unexecuted_blocks=1 00:40:15.203 00:40:15.203 ' 00:40:15.203 11:34:23 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:15.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.203 --rc genhtml_branch_coverage=1 00:40:15.203 --rc genhtml_function_coverage=1 00:40:15.203 --rc genhtml_legend=1 00:40:15.203 --rc geninfo_all_blocks=1 00:40:15.203 --rc geninfo_unexecuted_blocks=1 00:40:15.203 00:40:15.203 ' 
00:40:15.203 11:34:23 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:15.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.203 --rc genhtml_branch_coverage=1 00:40:15.203 --rc genhtml_function_coverage=1 00:40:15.203 --rc genhtml_legend=1 00:40:15.203 --rc geninfo_all_blocks=1 00:40:15.203 --rc geninfo_unexecuted_blocks=1 00:40:15.203 00:40:15.203 ' 00:40:15.203 11:34:23 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:15.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.203 --rc genhtml_branch_coverage=1 00:40:15.203 --rc genhtml_function_coverage=1 00:40:15.203 --rc genhtml_legend=1 00:40:15.203 --rc geninfo_all_blocks=1 00:40:15.203 --rc geninfo_unexecuted_blocks=1 00:40:15.203 00:40:15.203 ' 00:40:15.203 11:34:23 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.203 11:34:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.203 11:34:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.203 11:34:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.203 11:34:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.203 11:34:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.203 11:34:23 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.203 11:34:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.203 11:34:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:15.203 11:34:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:15.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:15.203 11:34:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:15.203 11:34:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:15.203 11:34:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:15.203 11:34:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:15.203 11:34:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:15.203 11:34:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:15.203 /tmp/:spdk-test:key0 00:40:15.203 11:34:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:15.203 11:34:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:15.203 11:34:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:15.464 11:34:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:15.464 11:34:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:15.464 /tmp/:spdk-test:key1 00:40:15.464 11:34:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=300758 00:40:15.464 11:34:23 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 300758 00:40:15.464 11:34:23 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:15.464 11:34:23 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 300758 ']' 00:40:15.464 11:34:23 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.464 11:34:23 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:15.464 11:34:23 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:15.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.464 11:34:23 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:15.464 11:34:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:15.464 [2024-11-19 11:34:23.625147] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:40:15.464 [2024-11-19 11:34:23.625218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300758 ] 00:40:15.464 [2024-11-19 11:34:23.705977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.464 [2024-11-19 11:34:23.743255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:16.407 11:34:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:16.407 [2024-11-19 11:34:24.434715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:16.407 null0 00:40:16.407 [2024-11-19 11:34:24.466758] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:16.407 [2024-11-19 11:34:24.467096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.407 11:34:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:16.407 1056838054 00:40:16.407 11:34:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:16.407 75765703 00:40:16.407 11:34:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=300965 00:40:16.407 11:34:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 300965 /var/tmp/bperf.sock 00:40:16.407 11:34:24 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 300965 ']' 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:16.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:16.407 [2024-11-19 11:34:24.542548] Starting SPDK v25.01-pre git sha1 029355612 / DPDK 24.03.0 initialization... 
00:40:16.407 [2024-11-19 11:34:24.542597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300965 ] 00:40:16.407 [2024-11-19 11:34:24.606570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.407 [2024-11-19 11:34:24.636705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:16.407 11:34:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:16.407 11:34:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:16.407 11:34:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:16.668 11:34:24 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:16.668 11:34:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:16.929 11:34:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:16.929 11:34:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:16.929 [2024-11-19 11:34:25.211607] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:17.190 nvme0n1 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:17.190 11:34:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:17.190 11:34:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:17.190 11:34:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:17.190 11:34:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:17.190 11:34:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.450 11:34:25 keyring_linux -- keyring/linux.sh@25 -- # sn=1056838054 00:40:17.450 11:34:25 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:17.450 11:34:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:17.450 11:34:25 keyring_linux -- keyring/linux.sh@26 -- # [[ 1056838054 == \1\0\5\6\8\3\8\0\5\4 ]] 00:40:17.450 11:34:25 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1056838054 00:40:17.450 11:34:25 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:17.450 11:34:25 
keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:17.450 Running I/O for 1 seconds... 00:40:18.834 16171.00 IOPS, 63.17 MiB/s 00:40:18.834 Latency(us) 00:40:18.834 [2024-11-19T10:34:27.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.834 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:18.834 nvme0n1 : 1.01 16172.40 63.17 0.00 0.00 7880.47 6908.59 16602.45 00:40:18.834 [2024-11-19T10:34:27.186Z] =================================================================================================================== 00:40:18.834 [2024-11-19T10:34:27.186Z] Total : 16172.40 63.17 0.00 0.00 7880.47 6908.59 16602.45 00:40:18.834 { 00:40:18.834 "results": [ 00:40:18.834 { 00:40:18.834 "job": "nvme0n1", 00:40:18.834 "core_mask": "0x2", 00:40:18.834 "workload": "randread", 00:40:18.834 "status": "finished", 00:40:18.834 "queue_depth": 128, 00:40:18.834 "io_size": 4096, 00:40:18.834 "runtime": 1.007828, 00:40:18.834 "iops": 16172.402433748615, 00:40:18.834 "mibps": 63.17344700683053, 00:40:18.834 "io_failed": 0, 00:40:18.834 "io_timeout": 0, 00:40:18.834 "avg_latency_us": 7880.47296807575, 00:40:18.834 "min_latency_us": 6908.586666666667, 00:40:18.834 "max_latency_us": 16602.453333333335 00:40:18.834 } 00:40:18.834 ], 00:40:18.834 "core_count": 1 00:40:18.834 } 00:40:18.834 11:34:26 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:18.834 11:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:18.834 11:34:26 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:18.834 11:34:26 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:18.834 11:34:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:18.834 
11:34:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:18.834 11:34:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:18.834 11:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:18.834 11:34:27 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:18.834 11:34:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:18.834 11:34:27 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:18.834 11:34:27 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:18.834 11:34:27 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:40:18.834 11:34:27 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:18.834 11:34:27 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:18.834 11:34:27 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:18.834 11:34:27 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:18.834 11:34:27 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:18.834 11:34:27 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:18.834 11:34:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:19.095 [2024-11-19 11:34:27.315822] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:19.095 [2024-11-19 11:34:27.316577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87f270 (107): Transport endpoint is not connected 00:40:19.095 [2024-11-19 11:34:27.317573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87f270 (9): Bad file descriptor 00:40:19.095 [2024-11-19 11:34:27.318575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:19.095 [2024-11-19 11:34:27.318582] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:19.095 [2024-11-19 11:34:27.318587] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:19.095 [2024-11-19 11:34:27.318594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:40:19.095 request: 00:40:19.095 { 00:40:19.095 "name": "nvme0", 00:40:19.095 "trtype": "tcp", 00:40:19.095 "traddr": "127.0.0.1", 00:40:19.095 "adrfam": "ipv4", 00:40:19.096 "trsvcid": "4420", 00:40:19.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:19.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:19.096 "prchk_reftag": false, 00:40:19.096 "prchk_guard": false, 00:40:19.096 "hdgst": false, 00:40:19.096 "ddgst": false, 00:40:19.096 "psk": ":spdk-test:key1", 00:40:19.096 "allow_unrecognized_csi": false, 00:40:19.096 "method": "bdev_nvme_attach_controller", 00:40:19.096 "req_id": 1 00:40:19.096 } 00:40:19.096 Got JSON-RPC error response 00:40:19.096 response: 00:40:19.096 { 00:40:19.096 "code": -5, 00:40:19.096 "message": "Input/output error" 00:40:19.096 } 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@33 -- # sn=1056838054 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1056838054 00:40:19.096 1 links removed 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 
00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@33 -- # sn=75765703 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 75765703 00:40:19.096 1 links removed 00:40:19.096 11:34:27 keyring_linux -- keyring/linux.sh@41 -- # killprocess 300965 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 300965 ']' 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 300965 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 300965 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 300965' 00:40:19.096 killing process with pid 300965 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@973 -- # kill 300965 00:40:19.096 Received shutdown signal, test time was about 1.000000 seconds 00:40:19.096 00:40:19.096 Latency(us) 00:40:19.096 [2024-11-19T10:34:27.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:19.096 [2024-11-19T10:34:27.448Z] =================================================================================================================== 00:40:19.096 [2024-11-19T10:34:27.448Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:19.096 11:34:27 keyring_linux -- common/autotest_common.sh@978 -- # wait 
300965 00:40:19.356 11:34:27 keyring_linux -- keyring/linux.sh@42 -- # killprocess 300758 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 300758 ']' 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 300758 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 300758 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 300758' 00:40:19.356 killing process with pid 300758 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@973 -- # kill 300758 00:40:19.356 11:34:27 keyring_linux -- common/autotest_common.sh@978 -- # wait 300758 00:40:19.616 00:40:19.616 real 0m4.570s 00:40:19.616 user 0m8.232s 00:40:19.616 sys 0m1.365s 00:40:19.616 11:34:27 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:19.616 11:34:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:19.616 ************************************ 00:40:19.616 END TEST keyring_linux 00:40:19.616 ************************************ 00:40:19.616 11:34:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:19.616 11:34:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:19.616 11:34:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:19.616 11:34:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:19.616 11:34:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:40:19.616 11:34:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:19.616 11:34:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:19.617 11:34:27 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:40:19.617 11:34:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:19.617 11:34:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:19.617 11:34:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:19.617 11:34:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:19.617 11:34:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:19.617 11:34:27 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:19.617 11:34:27 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:19.617 11:34:27 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:19.617 11:34:27 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:19.617 11:34:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:19.617 11:34:27 -- common/autotest_common.sh@10 -- # set +x 00:40:19.617 11:34:27 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:19.617 11:34:27 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:19.617 11:34:27 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:19.617 11:34:27 -- common/autotest_common.sh@10 -- # set +x 00:40:27.761 INFO: APP EXITING 00:40:27.761 INFO: killing all VMs 00:40:27.761 INFO: killing vhost app 00:40:27.761 INFO: EXIT DONE 00:40:31.065 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:31.065 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:31.065 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:31.065 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:31.065 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:31.065 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:31.065 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:31.065 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:31.326 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:31.326 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:31.326 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:31.326 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:31.326 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:31.326 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:31.326 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:31.326 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:31.326 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:40:35.535 Cleaning 00:40:35.535 Removing: /var/run/dpdk/spdk0/config 00:40:35.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:35.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:35.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:35.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:35.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:35.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:35.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:35.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:35.797 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:35.797 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:35.797 Removing: /var/run/dpdk/spdk1/config 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:35.797 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:35.797 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:35.797 Removing: /var/run/dpdk/spdk2/config 00:40:35.797 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:35.797 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:35.797 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:35.797 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:35.797 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:35.797 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:35.797 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:35.797 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:35.797 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:35.797 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:35.797 Removing: /var/run/dpdk/spdk3/config 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:35.797 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:35.797 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:35.797 Removing: /var/run/dpdk/spdk4/config 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:35.798 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:35.798 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:40:35.798 Removing: /dev/shm/bdev_svc_trace.1
00:40:35.798 Removing: /dev/shm/nvmf_trace.0
00:40:36.059 Removing: /dev/shm/spdk_tgt_trace.pid3878494
00:40:36.059 Removing: /var/run/dpdk/spdk0
00:40:36.059 Removing: /var/run/dpdk/spdk1
00:40:36.059 Removing: /var/run/dpdk/spdk2
00:40:36.059 Removing: /var/run/dpdk/spdk3
00:40:36.059 Removing: /var/run/dpdk/spdk4
00:40:36.059 Removing: /var/run/dpdk/spdk_pid101216
00:40:36.059 Removing: /var/run/dpdk/spdk_pid101218
00:40:36.059 Removing: /var/run/dpdk/spdk_pid11315
00:40:36.059 Removing: /var/run/dpdk/spdk_pid126871
00:40:36.059 Removing: /var/run/dpdk/spdk_pid127485
00:40:36.059 Removing: /var/run/dpdk/spdk_pid128188
00:40:36.059 Removing: /var/run/dpdk/spdk_pid129024
00:40:36.059 Removing: /var/run/dpdk/spdk_pid12986
00:40:36.059 Removing: /var/run/dpdk/spdk_pid129998
00:40:36.059 Removing: /var/run/dpdk/spdk_pid130789
00:40:36.059 Removing: /var/run/dpdk/spdk_pid131614
00:40:36.059 Removing: /var/run/dpdk/spdk_pid132299
00:40:36.059 Removing: /var/run/dpdk/spdk_pid138030
00:40:36.059 Removing: /var/run/dpdk/spdk_pid138283
00:40:36.059 Removing: /var/run/dpdk/spdk_pid14546
00:40:36.059 Removing: /var/run/dpdk/spdk_pid145860
00:40:36.059 Removing: /var/run/dpdk/spdk_pid146157
00:40:36.059 Removing: /var/run/dpdk/spdk_pid153292
00:40:36.059 Removing: /var/run/dpdk/spdk_pid158873
00:40:36.059 Removing: /var/run/dpdk/spdk_pid16368
00:40:36.059 Removing: /var/run/dpdk/spdk_pid171020
00:40:36.059 Removing: /var/run/dpdk/spdk_pid171761
00:40:36.059 Removing: /var/run/dpdk/spdk_pid177680
00:40:36.059 Removing: /var/run/dpdk/spdk_pid178059
00:40:36.059 Removing: /var/run/dpdk/spdk_pid183744
00:40:36.059 Removing: /var/run/dpdk/spdk_pid191097
00:40:36.059 Removing: /var/run/dpdk/spdk_pid194125
00:40:36.059 Removing: /var/run/dpdk/spdk_pid207297
00:40:36.059 Removing: /var/run/dpdk/spdk_pid218823
00:40:36.059 Removing: /var/run/dpdk/spdk_pid220831
00:40:36.059 Removing: /var/run/dpdk/spdk_pid221843
00:40:36.059 Removing: /var/run/dpdk/spdk_pid22425
00:40:36.059 Removing: /var/run/dpdk/spdk_pid243518
00:40:36.059 Removing: /var/run/dpdk/spdk_pid248759
00:40:36.059 Removing: /var/run/dpdk/spdk_pid251975
00:40:36.059 Removing: /var/run/dpdk/spdk_pid259856
00:40:36.059 Removing: /var/run/dpdk/spdk_pid259947
00:40:36.059 Removing: /var/run/dpdk/spdk_pid266551
00:40:36.059 Removing: /var/run/dpdk/spdk_pid268976
00:40:36.059 Removing: /var/run/dpdk/spdk_pid2711
00:40:36.059 Removing: /var/run/dpdk/spdk_pid271185
00:40:36.059 Removing: /var/run/dpdk/spdk_pid272677
00:40:36.059 Removing: /var/run/dpdk/spdk_pid275000
00:40:36.059 Removing: /var/run/dpdk/spdk_pid276402
00:40:36.059 Removing: /var/run/dpdk/spdk_pid28226
00:40:36.059 Removing: /var/run/dpdk/spdk_pid287837
00:40:36.059 Removing: /var/run/dpdk/spdk_pid288497
00:40:36.059 Removing: /var/run/dpdk/spdk_pid289159
00:40:36.059 Removing: /var/run/dpdk/spdk_pid292218
00:40:36.059 Removing: /var/run/dpdk/spdk_pid292671
00:40:36.321 Removing: /var/run/dpdk/spdk_pid293238
00:40:36.321 Removing: /var/run/dpdk/spdk_pid298363
00:40:36.321 Removing: /var/run/dpdk/spdk_pid298446
00:40:36.321 Removing: /var/run/dpdk/spdk_pid300194
00:40:36.321 Removing: /var/run/dpdk/spdk_pid300758
00:40:36.321 Removing: /var/run/dpdk/spdk_pid300965
00:40:36.321 Removing: /var/run/dpdk/spdk_pid33943
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3876827
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3878494
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3879165
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3880222
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3880546
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3881738
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3881947
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3882352
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3883352
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3884021
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3884414
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3884813
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3885228
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3885626
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3885986
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3886130
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3886426
00:40:36.321 Removing: /var/run/dpdk/spdk_pid3887482
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3891059
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3891482
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3891897
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3891910
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3892443
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3892622
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3893086
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3893589
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3893989
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3894163
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3894467
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3894529
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3895053
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3895339
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3895734
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3900937
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3906817
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3919450
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3920145
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3925907
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3926367
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3932165
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3939752
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3942863
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3957043
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3969066
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3971105
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3972332
00:40:36.322 Removing: /var/run/dpdk/spdk_pid3994830
00:40:36.322 Removing: /var/run/dpdk/spdk_pid4000264
00:40:36.322 Removing: /var/run/dpdk/spdk_pid4060590
00:40:36.322 Removing: /var/run/dpdk/spdk_pid4068076
00:40:36.322 Removing: /var/run/dpdk/spdk_pid4075732
00:40:36.322 Removing: /var/run/dpdk/spdk_pid4084000
00:40:36.322 Removing: /var/run/dpdk/spdk_pid4084009
00:40:36.322 Removing: /var/run/dpdk/spdk_pid4085027
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4086042
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4087097
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4087732
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4087898
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4088127
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4088355
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4088358
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4089363
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4090366
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4091378
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4092053
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4092067
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4092396
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4093829
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4095237
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4105903
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4142720
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4148592
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4150610
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4153263
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4153405
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4153483
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4153753
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4154335
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4156482
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4157569
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4157951
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4160651
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4161352
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4162065
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4167584
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4174869
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4174870
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4174871
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4180117
00:40:36.583 Removing: /var/run/dpdk/spdk_pid4191220
00:40:36.583 Removing: /var/run/dpdk/spdk_pid44097
00:40:36.583 Removing: /var/run/dpdk/spdk_pid44104
00:40:36.583 Removing: /var/run/dpdk/spdk_pid49814
00:40:36.583 Removing: /var/run/dpdk/spdk_pid49963
00:40:36.583 Removing: /var/run/dpdk/spdk_pid50177
00:40:36.583 Removing: /var/run/dpdk/spdk_pid50829
00:40:36.583 Removing: /var/run/dpdk/spdk_pid50843
00:40:36.583 Removing: /var/run/dpdk/spdk_pid56900
00:40:36.583 Removing: /var/run/dpdk/spdk_pid57565
00:40:36.583 Removing: /var/run/dpdk/spdk_pid63265
00:40:36.583 Removing: /var/run/dpdk/spdk_pid66557
00:40:36.583 Removing: /var/run/dpdk/spdk_pid73917
00:40:36.583 Removing: /var/run/dpdk/spdk_pid81165
00:40:36.583 Removing: /var/run/dpdk/spdk_pid91738
00:40:36.583 Clean
00:40:36.844 11:34:44 -- common/autotest_common.sh@1453 -- # return 0
00:40:36.844 11:34:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:40:36.844 11:34:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:36.844 11:34:44 -- common/autotest_common.sh@10 -- # set +x
00:40:36.845 11:34:45 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:40:36.845 11:34:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:36.845 11:34:45 -- common/autotest_common.sh@10 -- # set +x
00:40:36.845 11:34:45 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:36.845 11:34:45 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:36.845 11:34:45 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:36.845 11:34:45 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:40:36.845 11:34:45 -- spdk/autotest.sh@398 -- # hostname
00:40:36.845 11:34:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:37.106 geninfo: WARNING: invalid characters removed from testname!
00:41:03.695 11:35:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:05.615 11:35:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:07.075 11:35:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:08.987 11:35:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:10.372 11:35:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:12.288 11:35:20 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:13.673 11:35:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:41:13.673 11:35:21 -- spdk/autorun.sh@1 -- $ timing_finish
00:41:13.673 11:35:21 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:41:13.673 11:35:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:13.673 11:35:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:41:13.673 11:35:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:41:13.673 + [[ -n 3791108 ]]
00:41:13.673 + sudo kill 3791108
00:41:13.685 [Pipeline] }
00:41:13.704 [Pipeline] // stage
00:41:13.710 [Pipeline] }
00:41:13.727 [Pipeline] // timeout
00:41:13.732 [Pipeline] }
00:41:13.748 [Pipeline] // catchError
00:41:13.753 [Pipeline] }
00:41:13.769 [Pipeline] // wrap
00:41:13.775 [Pipeline] }
00:41:13.789 [Pipeline] // catchError
00:41:13.798 [Pipeline] stage
00:41:13.801 [Pipeline] { (Epilogue)
00:41:13.815 [Pipeline] catchError
00:41:13.817 [Pipeline] {
00:41:13.831 [Pipeline] echo
00:41:13.833 Cleanup processes
00:41:13.841 [Pipeline] sh
00:41:14.133 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:14.134 314341 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:14.149 [Pipeline] sh
00:41:14.438 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:14.438 ++ grep -v 'sudo pgrep'
00:41:14.438 ++ awk '{print $1}'
00:41:14.438 + sudo kill -9
00:41:14.438 + true
00:41:14.451 [Pipeline] sh
00:41:14.740 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:26.991 [Pipeline] sh
00:41:27.279 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:27.279 Artifacts sizes are good
00:41:27.295 [Pipeline] archiveArtifacts
00:41:27.302 Archiving artifacts
00:41:27.441 [Pipeline] sh
00:41:27.723 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:27.739 [Pipeline] cleanWs
00:41:27.749 [WS-CLEANUP] Deleting project workspace...
00:41:27.750 [WS-CLEANUP] Deferred wipeout is used...
00:41:27.756 [WS-CLEANUP] done
00:41:27.758 [Pipeline] }
00:41:27.776 [Pipeline] // catchError
00:41:27.788 [Pipeline] sh
00:41:28.078 + logger -p user.info -t JENKINS-CI
00:41:28.089 [Pipeline] }
00:41:28.104 [Pipeline] // stage
00:41:28.110 [Pipeline] }
00:41:28.126 [Pipeline] // node
00:41:28.132 [Pipeline] End of Pipeline
00:41:28.175 Finished: SUCCESS